00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 1913
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3174
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.140 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.141 The recommended git tool is: git
00:00:00.141 using credential 00000000-0000-0000-0000-000000000002
00:00:00.142 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.193 Fetching changes from the remote Git repository
00:00:00.197 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.255 Using shallow fetch with depth 1
00:00:00.255 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.255 > git --version # timeout=10
00:00:00.287 > git --version # 'git version 2.39.2'
00:00:00.287 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.316 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.316 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.710 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.720 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.731 Checking out Revision bdda68d1e41499f94b336830106e36e3602574f3 (FETCH_HEAD)
00:00:05.731 > git config core.sparsecheckout # timeout=10
00:00:05.741 > git read-tree -mu HEAD # timeout=10
00:00:05.757 > git checkout -f bdda68d1e41499f94b336830106e36e3602574f3 # timeout=5
00:00:05.779 Commit message: "jenkins/jjb-config: Make sure proxies are set for pkgdep jobs"
00:00:05.779 > git rev-list --no-walk d763a45cd581fc315bd89c929406ef8de2500459 # timeout=10
00:00:05.859 [Pipeline] Start of Pipeline
00:00:05.869 [Pipeline] library
00:00:05.870 Loading library shm_lib@master
00:00:05.870 Library shm_lib@master is cached. Copying from home.
00:00:05.882 [Pipeline] node
00:00:05.896 Running on VM-host-SM4 in /var/jenkins/workspace/ubuntu22-vg-autotest
00:00:05.897 [Pipeline] {
00:00:05.908 [Pipeline] catchError
00:00:05.910 [Pipeline] {
00:00:05.921 [Pipeline] wrap
00:00:05.927 [Pipeline] {
00:00:05.932 [Pipeline] stage
00:00:05.934 [Pipeline] { (Prologue)
00:00:05.946 [Pipeline] echo
00:00:05.947 Node: VM-host-SM4
00:00:05.951 [Pipeline] cleanWs
00:00:05.980 [WS-CLEANUP] Deleting project workspace...
00:00:05.980 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.016 [WS-CLEANUP] done
00:00:06.178 [Pipeline] setCustomBuildProperty
00:00:06.225 [Pipeline] nodesByLabel
00:00:06.226 Found a total of 2 nodes with the 'sorcerer' label
00:00:06.234 [Pipeline] httpRequest
00:00:06.238 HttpMethod: GET
00:00:06.239 URL: http://10.211.164.101/packages/jbp_bdda68d1e41499f94b336830106e36e3602574f3.tar.gz
00:00:06.239 Sending request to url: http://10.211.164.101/packages/jbp_bdda68d1e41499f94b336830106e36e3602574f3.tar.gz
00:00:06.255 Response Code: HTTP/1.1 200 OK
00:00:06.256 Success: Status code 200 is in the accepted range: 200,404
00:00:06.256 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/jbp_bdda68d1e41499f94b336830106e36e3602574f3.tar.gz
00:00:08.608 [Pipeline] sh
00:00:08.889 + tar --no-same-owner -xf jbp_bdda68d1e41499f94b336830106e36e3602574f3.tar.gz
00:00:08.907 [Pipeline] httpRequest
00:00:08.911 HttpMethod: GET
00:00:08.912 URL: http://10.211.164.101/packages/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz
00:00:08.912 Sending request to url: http://10.211.164.101/packages/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz
00:00:08.927 Response Code: HTTP/1.1 200 OK
00:00:08.928 Success: Status code 200 is in the accepted range: 200,404
00:00:08.928 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz
00:00:33.882 [Pipeline] sh
00:00:34.163 + tar --no-same-owner -xf spdk_130b9406a1d197d63453b42652430be9d1b0727e.tar.gz
00:00:36.708 [Pipeline] sh
00:00:36.994 + git -C spdk log --oneline -n5
00:00:36.994 130b9406a test/nvmf: replace rpc_cmd() with direct invocation of rpc.py due to inherently larger timeout
00:00:36.994 5d3fd6726 bdev: Fix a race bug between unregistration and QoS poller
00:00:36.994 fbc673ece test/scheduler: Meassure utime of $spdk_pid threads as a fallback
00:00:36.994 3651466d0 test/scheduler: Calculate median of the cpu load samples
00:00:36.994 a7414547f test/scheduler: Make sure stderr is not O_TRUNCated in move_proc()
00:00:37.009 [Pipeline] writeFile
00:00:37.020 [Pipeline] sh
00:00:37.301 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:37.312 [Pipeline] sh
00:00:37.593 + cat autorun-spdk.conf
00:00:37.593 SPDK_TEST_UNITTEST=1
00:00:37.593 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:37.593 SPDK_TEST_NVME=1
00:00:37.593 SPDK_TEST_BLOCKDEV=1
00:00:37.593 SPDK_RUN_ASAN=1
00:00:37.593 SPDK_RUN_UBSAN=1
00:00:37.593 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:37.599 RUN_NIGHTLY=1
00:00:37.601 [Pipeline] }
00:00:37.617 [Pipeline] // stage
00:00:37.629 [Pipeline] stage
00:00:37.631 [Pipeline] { (Run VM)
00:00:37.643 [Pipeline] sh
00:00:37.924 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:37.924 + echo 'Start stage prepare_nvme.sh'
00:00:37.924 Start stage prepare_nvme.sh
00:00:37.924 + [[ -n 9 ]]
00:00:37.924 + disk_prefix=ex9
00:00:37.924 + [[ -n /var/jenkins/workspace/ubuntu22-vg-autotest ]]
00:00:37.924 + [[ -e /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf ]]
00:00:37.924 + source /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf
00:00:37.924 ++ SPDK_TEST_UNITTEST=1
00:00:37.924 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:37.924 ++ SPDK_TEST_NVME=1
00:00:37.924 ++ SPDK_TEST_BLOCKDEV=1
00:00:37.924 ++ SPDK_RUN_ASAN=1
00:00:37.924 ++ SPDK_RUN_UBSAN=1
00:00:37.924 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:37.924 ++ RUN_NIGHTLY=1
00:00:37.924 + cd /var/jenkins/workspace/ubuntu22-vg-autotest
00:00:37.924 + nvme_files=()
00:00:37.924 + declare -A nvme_files
00:00:37.924 + backend_dir=/var/lib/libvirt/images/backends
00:00:37.924 + nvme_files['nvme.img']=5G
00:00:37.924 + nvme_files['nvme-cmb.img']=5G
00:00:37.924 + nvme_files['nvme-multi0.img']=4G
00:00:37.924 + nvme_files['nvme-multi1.img']=4G
00:00:37.924 + nvme_files['nvme-multi2.img']=4G
00:00:37.924 + nvme_files['nvme-openstack.img']=8G
00:00:37.924 + nvme_files['nvme-zns.img']=5G
00:00:37.924 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:37.924 + (( SPDK_TEST_FTL == 1 ))
00:00:37.924 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:37.924 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:37.924 + for nvme in "${!nvme_files[@]}"
00:00:37.924 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi2.img -s 4G
00:00:37.924 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:37.924 + for nvme in "${!nvme_files[@]}"
00:00:37.924 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-cmb.img -s 5G
00:00:37.924 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:37.924 + for nvme in "${!nvme_files[@]}"
00:00:37.924 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-openstack.img -s 8G
00:00:37.924 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:37.924 + for nvme in "${!nvme_files[@]}"
00:00:37.924 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-zns.img -s 5G
00:00:37.924 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:37.924 + for nvme in "${!nvme_files[@]}"
00:00:37.924 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi1.img -s 4G
00:00:37.924 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:38.184 + for nvme in "${!nvme_files[@]}"
00:00:38.184 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi0.img -s 4G
00:00:38.184 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:38.184 + for nvme in "${!nvme_files[@]}"
00:00:38.184 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme.img -s 5G
00:00:38.184 Formatting '/var/lib/libvirt/images/backends/ex9-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:38.184 ++ sudo grep -rl ex9-nvme.img /etc/libvirt/qemu
00:00:38.184 + echo 'End stage prepare_nvme.sh'
00:00:38.184 End stage prepare_nvme.sh
00:00:38.195 [Pipeline] sh
00:00:38.476 + DISTRO=ubuntu2204 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:38.476 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex9-nvme.img -H -a -v -f ubuntu2204
00:00:38.476
00:00:38.476 DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant
00:00:38.476 SPDK_DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk
00:00:38.476 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu22-vg-autotest
00:00:38.476 HELP=0
00:00:38.476 DRY_RUN=0
00:00:38.476 NVME_FILE=/var/lib/libvirt/images/backends/ex9-nvme.img,
00:00:38.476 NVME_DISKS_TYPE=nvme,
00:00:38.476 NVME_AUTO_CREATE=0
00:00:38.476 NVME_DISKS_NAMESPACES=,
00:00:38.476 NVME_CMB=,
00:00:38.476 NVME_PMR=,
00:00:38.476 NVME_ZNS=,
00:00:38.476 NVME_MS=,
00:00:38.476 NVME_FDP=,
00:00:38.476 SPDK_VAGRANT_DISTRO=ubuntu2204
00:00:38.476 SPDK_VAGRANT_VMCPU=10
00:00:38.476 SPDK_VAGRANT_VMRAM=12288
00:00:38.476 SPDK_VAGRANT_PROVIDER=libvirt
00:00:38.476 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:38.476 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:38.476 SPDK_OPENSTACK_NETWORK=0
00:00:38.476 VAGRANT_PACKAGE_BOX=0
00:00:38.476 VAGRANTFILE=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:38.476 FORCE_DISTRO=true
00:00:38.476 VAGRANT_BOX_VERSION=
00:00:38.476 EXTRA_VAGRANTFILES=
00:00:38.476 NIC_MODEL=e1000
00:00:38.476
00:00:38.476 mkdir: created directory '/var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt'
00:00:38.476 /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt /var/jenkins/workspace/ubuntu22-vg-autotest
00:00:41.010 Bringing machine 'default' up with 'libvirt' provider...
00:00:41.578 ==> default: Creating image (snapshot of base box volume).
00:00:41.836 ==> default: Creating domain with the following settings...
00:00:41.836 ==> default: -- Name: ubuntu2204-22.04-1711172311-2200_default_1718085191_8e906a0431d72a65af61
00:00:41.836 ==> default: -- Domain type: kvm
00:00:41.836 ==> default: -- Cpus: 10
00:00:41.836 ==> default: -- Feature: acpi
00:00:41.836 ==> default: -- Feature: apic
00:00:41.836 ==> default: -- Feature: pae
00:00:41.836 ==> default: -- Memory: 12288M
00:00:41.836 ==> default: -- Memory Backing: hugepages:
00:00:41.836 ==> default: -- Management MAC:
00:00:41.836 ==> default: -- Loader:
00:00:41.836 ==> default: -- Nvram:
00:00:41.836 ==> default: -- Base box: spdk/ubuntu2204
00:00:41.836 ==> default: -- Storage pool: default
00:00:41.836 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2204-22.04-1711172311-2200_default_1718085191_8e906a0431d72a65af61.img (20G)
00:00:41.836 ==> default: -- Volume Cache: default
00:00:41.836 ==> default: -- Kernel:
00:00:41.836 ==> default: -- Initrd:
00:00:41.836 ==> default: -- Graphics Type: vnc
00:00:41.836 ==> default: -- Graphics Port: -1
00:00:41.836 ==> default: -- Graphics IP: 127.0.0.1
00:00:41.836 ==> default: -- Graphics Password: Not defined
00:00:41.836 ==> default: -- Video Type: cirrus
00:00:41.836 ==> default: -- Video VRAM: 9216
00:00:41.836 ==> default: -- Sound Type:
00:00:41.836 ==> default: -- Keymap: en-us
00:00:41.836 ==> default: -- TPM Path:
00:00:41.836 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:41.836 ==> default: -- Command line args:
00:00:41.836 ==> default: -> value=-device,
00:00:41.836 ==> default: -> value=nvme,id=nvme-0,serial=12340,
00:00:41.836 ==> default: -> value=-drive,
00:00:41.836 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme.img,if=none,id=nvme-0-drive0,
00:00:41.836 ==> default: -> value=-device,
00:00:41.836 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:41.836 ==> default: Creating shared folders metadata...
00:00:41.836 ==> default: Starting domain.
00:00:43.737 ==> default: Waiting for domain to get an IP address...
00:00:53.767 ==> default: Waiting for SSH to become available...
00:00:56.351 ==> default: Configuring and enabling network interfaces...
00:01:01.643 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:05.833 ==> default: Mounting SSHFS shared folder...
00:01:07.204 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output => /home/vagrant/spdk_repo/output
00:01:07.204 ==> default: Checking Mount..
00:01:07.803 ==> default: Folder Successfully Mounted!
00:01:07.803 ==> default: Running provisioner: file...
00:01:08.367 default: ~/.gitconfig => .gitconfig
00:01:08.624
00:01:08.624 SUCCESS!
00:01:08.624
00:01:08.624 cd to /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt and type "vagrant ssh" to use.
00:01:08.624 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:08.624 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt" to destroy all trace of vm.
00:01:08.624
00:01:08.634 [Pipeline] }
00:01:08.652 [Pipeline] // stage
00:01:08.663 [Pipeline] dir
00:01:08.664 Running in /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt
00:01:08.666 [Pipeline] {
00:01:08.682 [Pipeline] catchError
00:01:08.684 [Pipeline] {
00:01:08.700 [Pipeline] sh
00:01:09.045 + vagrant ssh-config --host vagrant
00:01:09.045 + sed -ne /^Host/,$p
00:01:09.045 + tee ssh_conf
00:01:12.337 Host vagrant
00:01:12.337 HostName 192.168.121.3
00:01:12.337 User vagrant
00:01:12.337 Port 22
00:01:12.337 UserKnownHostsFile /dev/null
00:01:12.337 StrictHostKeyChecking no
00:01:12.337 PasswordAuthentication no
00:01:12.337 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2204/22.04-1711172311-2200/libvirt/ubuntu2204
00:01:12.337 IdentitiesOnly yes
00:01:12.337 LogLevel FATAL
00:01:12.337 ForwardAgent yes
00:01:12.337 ForwardX11 yes
00:01:12.337
00:01:12.349 [Pipeline] withEnv
00:01:12.351 [Pipeline] {
00:01:12.365 [Pipeline] sh
00:01:12.644 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:12.644 source /etc/os-release
00:01:12.644 [[ -e /image.version ]] && img=$(< /image.version)
00:01:12.644 # Minimal, systemd-like check.
00:01:12.644 if [[ -e /.dockerenv ]]; then
00:01:12.644 # Clear garbage from the node's name:
00:01:12.644 # agt-er_autotest_547-896 -> autotest_547-896
00:01:12.644 # $HOSTNAME is the actual container id
00:01:12.644 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:12.644 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:12.644 # We can assume this is a mount from a host where container is running,
00:01:12.644 # so fetch its hostname to easily identify the target swarm worker.
00:01:12.644 container="$(< /etc/hostname) ($agent)"
00:01:12.644 else
00:01:12.644 # Fallback
00:01:12.644 container=$agent
00:01:12.644 fi
00:01:12.644 fi
00:01:12.644 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:12.644
00:01:12.913 [Pipeline] }
00:01:12.932 [Pipeline] // withEnv
00:01:12.940 [Pipeline] setCustomBuildProperty
00:01:12.955 [Pipeline] stage
00:01:12.957 [Pipeline] { (Tests)
00:01:12.975 [Pipeline] sh
00:01:13.257 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:13.529 [Pipeline] sh
00:01:13.807 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:14.080 [Pipeline] timeout
00:01:14.081 Timeout set to expire in 1 hr 30 min
00:01:14.082 [Pipeline] {
00:01:14.098 [Pipeline] sh
00:01:14.379 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:14.948 HEAD is now at 130b9406a test/nvmf: replace rpc_cmd() with direct invocation of rpc.py due to inherently larger timeout
00:01:14.961 [Pipeline] sh
00:01:15.243 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:15.515 [Pipeline] sh
00:01:15.795 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:16.071 [Pipeline] sh
00:01:16.374 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu22-vg-autotest ./autoruner.sh spdk_repo
00:01:16.634 ++ readlink -f spdk_repo
00:01:16.634 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:16.634 + [[ -n /home/vagrant/spdk_repo ]]
00:01:16.634 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:16.634 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:16.634 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:16.634 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:16.634 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:16.634 + [[ ubuntu22-vg-autotest == pkgdep-* ]]
00:01:16.634 + cd /home/vagrant/spdk_repo
00:01:16.634 + source /etc/os-release
00:01:16.634 ++ PRETTY_NAME='Ubuntu 22.04.4 LTS'
00:01:16.634 ++ NAME=Ubuntu
00:01:16.634 ++ VERSION_ID=22.04
00:01:16.634 ++ VERSION='22.04.4 LTS (Jammy Jellyfish)'
00:01:16.634 ++ VERSION_CODENAME=jammy
00:01:16.634 ++ ID=ubuntu
00:01:16.634 ++ ID_LIKE=debian
00:01:16.634 ++ HOME_URL=https://www.ubuntu.com/
00:01:16.634 ++ SUPPORT_URL=https://help.ubuntu.com/
00:01:16.634 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
00:01:16.634 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
00:01:16.634 ++ UBUNTU_CODENAME=jammy
00:01:16.634 + uname -a
00:01:16.634 Linux ubuntu2204-cloud-1711172311-2200 5.15.0-101-generic #111-Ubuntu SMP Tue Mar 5 20:16:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
00:01:16.634 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:16.634 Hugepages
00:01:16.634 node hugesize free / total
00:01:16.634 node0 1048576kB 0 / 0
00:01:16.634 node0 2048kB 0 / 0
00:01:16.634
00:01:16.634 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:16.893 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:16.893 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:16.893 + rm -f /tmp/spdk-ld-path
00:01:16.893 + source autorun-spdk.conf
00:01:16.893 ++ SPDK_TEST_UNITTEST=1
00:01:16.893 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:16.893 ++ SPDK_TEST_NVME=1
00:01:16.893 ++ SPDK_TEST_BLOCKDEV=1
00:01:16.893 ++ SPDK_RUN_ASAN=1
00:01:16.893 ++ SPDK_RUN_UBSAN=1
00:01:16.893 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:16.893 ++ RUN_NIGHTLY=1
00:01:16.893 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:16.893 + [[ -n '' ]]
00:01:16.893 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:16.893 + for M in /var/spdk/build-*-manifest.txt
00:01:16.893 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:16.893 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:16.893 + for M in /var/spdk/build-*-manifest.txt
00:01:16.893 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:16.893 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:16.893 ++ uname
00:01:16.893 + [[ Linux == \L\i\n\u\x ]]
00:01:16.893 + sudo dmesg -T
00:01:16.893 + sudo dmesg --clear
00:01:16.893 + dmesg_pid=2112
00:01:16.893 + [[ Ubuntu == FreeBSD ]]
00:01:16.893 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:16.893 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:16.893 + sudo dmesg -Tw
00:01:16.893 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:16.893 + [[ -x /usr/src/fio-static/fio ]]
00:01:16.893 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:16.893 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:16.893 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:16.893 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64)
00:01:16.893 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:01:16.893 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:01:16.893 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:16.893 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:16.893 Test configuration:
00:01:16.893 SPDK_TEST_UNITTEST=1
00:01:16.893 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:16.893 SPDK_TEST_NVME=1
00:01:16.893 SPDK_TEST_BLOCKDEV=1
00:01:16.893 SPDK_RUN_ASAN=1
00:01:16.893 SPDK_RUN_UBSAN=1
00:01:16.893 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:17.152 RUN_NIGHTLY=1
05:53:47 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:17.152 05:53:47 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:17.152 05:53:47 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:17.152 05:53:47 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:17.152 05:53:47 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:17.152 05:53:47 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:17.152 05:53:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:17.152 05:53:47 -- paths/export.sh@5 -- $ export PATH
00:01:17.152 05:53:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:17.152 05:53:47 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:17.152 05:53:47 -- common/autobuild_common.sh@435 -- $ date +%s
00:01:17.152 05:53:47 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1718085227.XXXXXX
00:01:17.152 05:53:47 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1718085227.3diJc4
00:01:17.152 05:53:47 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:01:17.152 05:53:47 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:01:17.152 05:53:47 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:17.152 05:53:47 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:17.152 05:53:47 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:17.153 05:53:47 -- common/autobuild_common.sh@451 -- $ get_config_params
00:01:17.153 05:53:47 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:01:17.153 05:53:47 -- common/autotest_common.sh@10 -- $ set +x
00:01:17.153 05:53:47 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage'
00:01:17.153 05:53:47 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:17.153 05:53:47 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:17.153 05:53:47 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:17.153 05:53:47 -- spdk/autobuild.sh@16 -- $ date -u
00:01:17.153 Tue Jun 11 05:53:47 UTC 2024
00:01:17.153 05:53:47 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:17.153 LTS-43-g130b9406a
00:01:17.153 05:53:47 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:17.153 05:53:47 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:17.153 05:53:47 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:01:17.153 05:53:47 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:17.153 05:53:47 -- common/autotest_common.sh@10 -- $ set +x
00:01:17.153 ************************************
00:01:17.153 START TEST asan
00:01:17.153 ************************************
00:01:17.153 05:53:47 -- common/autotest_common.sh@1104 -- $ echo 'using asan'
00:01:17.153 using asan
00:01:17.153
00:01:17.153 real 0m0.000s
00:01:17.153 user 0m0.000s
00:01:17.153 sys 0m0.000s
00:01:17.153 05:53:47 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:17.153 05:53:47 -- common/autotest_common.sh@10 -- $ set +x
00:01:17.153 ************************************
00:01:17.153 END TEST asan
00:01:17.153 ************************************
00:01:17.153 05:53:47 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:17.153 05:53:47 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:17.153 05:53:47 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:01:17.153 05:53:47 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:17.153 05:53:47 -- common/autotest_common.sh@10 -- $ set +x
00:01:17.153 ************************************
00:01:17.153 START TEST ubsan
00:01:17.153 ************************************
00:01:17.153 using ubsan
00:01:17.153 05:53:47 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan'
00:01:17.153
00:01:17.153 real 0m0.000s
00:01:17.153 user 0m0.000s
00:01:17.153 sys 0m0.000s
00:01:17.153 05:53:47 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:17.153 ************************************
00:01:17.153 END TEST ubsan
00:01:17.153 ************************************
00:01:17.153 05:53:47 -- common/autotest_common.sh@10 -- $ set +x
00:01:17.153 05:53:47 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:17.153 05:53:47 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:17.153 05:53:47 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:17.153 05:53:47 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:17.153 05:53:47 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:17.153 05:53:47 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]]
00:01:17.153 05:53:47 -- spdk/autobuild.sh@58 -- $ unittest_build
00:01:17.153 05:53:47 -- common/autobuild_common.sh@411 -- $ run_test unittest_build _unittest_build
00:01:17.153 05:53:47 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']'
00:01:17.153 05:53:47 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:17.153 05:53:47 -- common/autotest_common.sh@10 -- $ set +x
00:01:17.153 ************************************
00:01:17.153 START TEST unittest_build
00:01:17.153 ************************************
00:01:17.153 05:53:47 -- common/autotest_common.sh@1104 -- $ _unittest_build
00:01:17.153 05:53:47 -- common/autobuild_common.sh@402 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --without-shared
00:01:17.411 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:17.411 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:17.978 Using 'verbs' RDMA provider
00:01:36.695 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:01:48.901 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:01:49.159 Creating mk/config.mk...done.
00:01:49.159 Creating mk/cc.flags.mk...done.
00:01:49.159 Type 'make' to build.
05:54:19 -- common/autobuild_common.sh@403 -- $ make -j10
00:01:49.418 make[1]: Nothing to be done for 'all'.
00:02:04.291 The Meson build system
00:02:04.291 Version: 1.4.0
00:02:04.291 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:04.291 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:04.291 Build type: native build
00:02:04.291 Program cat found: YES (/usr/bin/cat)
00:02:04.291 Project name: DPDK
00:02:04.291 Project version: 23.11.0
00:02:04.291 C compiler for the host machine: cc (gcc 11.4.0 "cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0")
00:02:04.291 C linker for the host machine: cc ld.bfd 2.38
00:02:04.291 Host machine cpu family: x86_64
00:02:04.291 Host machine cpu: x86_64
00:02:04.291 Message: ## Building in Developer Mode ##
00:02:04.291 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:04.291 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:04.291 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:04.291 Program python3 found: YES (/usr/bin/python3)
00:02:04.291 Program cat found: YES (/usr/bin/cat)
00:02:04.291 Compiler for C supports arguments -march=native: YES
00:02:04.291 Checking for size of "void *" : 8
00:02:04.291 Checking for size of "void *" : 8 (cached)
00:02:04.291 Library m found: YES
00:02:04.291 Library numa found: YES
00:02:04.291 Has header "numaif.h" : YES
00:02:04.291 Library fdt found: NO
00:02:04.291 Library execinfo found: NO
00:02:04.291 Has header "execinfo.h" : YES
00:02:04.291 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2
00:02:04.291 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:04.291 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:04.291 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:04.291 Run-time dependency openssl found: YES 3.0.2
00:02:04.291 Run-time dependency libpcap found: NO (tried pkgconfig)
00:02:04.291 Library pcap found: NO
00:02:04.291 Compiler for C supports arguments -Wcast-qual: YES
00:02:04.291 Compiler for C supports arguments -Wdeprecated: YES
00:02:04.291 Compiler for C supports arguments -Wformat: YES
00:02:04.291 Compiler for C supports arguments -Wformat-nonliteral: YES
00:02:04.291 Compiler for C supports arguments -Wformat-security: YES
00:02:04.291 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:04.291 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:04.291 Compiler for C supports arguments -Wnested-externs: YES
00:02:04.291 Compiler for C supports arguments -Wold-style-definition: YES
00:02:04.291 Compiler for C supports arguments -Wpointer-arith: YES
00:02:04.291 Compiler for C supports arguments -Wsign-compare: YES
00:02:04.291 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:04.291 Compiler for C supports arguments -Wundef: YES
00:02:04.291 Compiler for C supports arguments -Wwrite-strings: YES
00:02:04.291 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:04.291 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:04.291 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:04.291 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:04.291 Program objdump found: YES (/usr/bin/objdump)
00:02:04.291 Compiler for C supports arguments -mavx512f: YES
00:02:04.291 Checking if "AVX512 checking" compiles: YES
00:02:04.291 Fetching value of define "__SSE4_2__" : 1
00:02:04.291 Fetching value of define "__AES__" : 1
00:02:04.291 Fetching value of define "__AVX__" : 1
00:02:04.291 Fetching value of define "__AVX2__" : 1
00:02:04.291 Fetching value of define "__AVX512BW__" : 1
00:02:04.291 Fetching value of define "__AVX512CD__" : 1
00:02:04.291 Fetching value of define "__AVX512DQ__" : 1
00:02:04.291 Fetching value of define "__AVX512F__" : 1
00:02:04.291 Fetching value of define "__AVX512VL__" : 1
00:02:04.291 Fetching value of define "__PCLMUL__" : 1
00:02:04.291 Fetching value of define "__RDRND__" : 1
00:02:04.291 Fetching value of define "__RDSEED__" : 1
00:02:04.291 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:04.291 Fetching value of define "__znver1__" : (undefined)
00:02:04.291 Fetching value of define "__znver2__" : (undefined)
00:02:04.291 Fetching value of define "__znver3__" : (undefined)
00:02:04.291 Fetching value of define "__znver4__" : (undefined)
00:02:04.291 Library asan found: YES
00:02:04.291 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:04.291 Message: lib/log: Defining dependency "log"
00:02:04.291 Message: lib/kvargs: Defining dependency "kvargs"
00:02:04.291 Message: lib/telemetry: Defining dependency "telemetry"
00:02:04.291 Library rt found: YES
00:02:04.291 Checking for function "getentropy" : NO
00:02:04.291 Message: lib/eal: Defining dependency "eal"
00:02:04.291 Message: lib/ring: Defining dependency "ring"
00:02:04.291 Message: lib/rcu: Defining dependency "rcu"
00:02:04.291 Message: lib/mempool: Defining dependency "mempool"
00:02:04.291 Message: lib/mbuf: Defining dependency "mbuf"
00:02:04.291 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:04.291 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:04.291 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:04.291 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:04.291 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:04.291 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:04.291 Compiler for C supports arguments -mpclmul: YES
00:02:04.291 Compiler for C supports arguments -maes: YES
00:02:04.291 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:04.291 Compiler for C supports arguments -mavx512bw: YES
00:02:04.291 Compiler for C supports arguments -mavx512dq: YES
00:02:04.292 Compiler for C supports arguments -mavx512vl: YES
00:02:04.292 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:04.292 Compiler for C supports arguments -mavx2: YES
00:02:04.292 Compiler for C supports arguments -mavx: YES
00:02:04.292 Message: lib/net: Defining dependency "net"
00:02:04.292 Message: lib/meter: Defining dependency "meter"
00:02:04.292 Message: lib/ethdev: Defining dependency "ethdev"
00:02:04.292 Message: lib/pci: Defining dependency "pci"
00:02:04.292 Message: lib/cmdline: Defining dependency "cmdline"
00:02:04.292 Message: lib/hash: Defining dependency "hash"
00:02:04.292 Message: lib/timer: Defining dependency "timer"
00:02:04.292 Message: lib/compressdev: Defining dependency "compressdev"
00:02:04.292 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:04.292 Message: lib/dmadev: Defining dependency "dmadev"
00:02:04.292 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:04.292 Message: lib/power: Defining dependency "power"
00:02:04.292 Message: lib/reorder: Defining dependency "reorder"
00:02:04.292 Message: lib/security: Defining dependency "security"
00:02:04.292 Has header "linux/userfaultfd.h" : YES
00:02:04.292 Has header "linux/vduse.h" : YES
00:02:04.292 Message: lib/vhost: Defining dependency "vhost"
00:02:04.292 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:04.292 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:04.292 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:04.292 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:04.292 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:04.292 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:04.292 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:04.292 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:04.292 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:04.292 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:04.292 Program doxygen found: YES (/usr/bin/doxygen)
00:02:04.292 Configuring doxy-api-html.conf using configuration
00:02:04.292 Configuring doxy-api-man.conf using configuration
00:02:04.292 Program mandb found: YES (/usr/bin/mandb)
00:02:04.292 Program sphinx-build found: NO
00:02:04.292 Configuring rte_build_config.h using configuration
00:02:04.292 Message:
00:02:04.292 =================
00:02:04.292 Applications Enabled
00:02:04.292 =================
00:02:04.292
00:02:04.292 apps:
00:02:04.292
00:02:04.292
00:02:04.292 Message:
00:02:04.292 =================
00:02:04.292 Libraries Enabled
00:02:04.292 =================
00:02:04.292
00:02:04.292 libs:
00:02:04.292 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:04.292 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:04.292 cryptodev, dmadev, power, reorder, security, vhost,
00:02:04.292
00:02:04.292 Message:
00:02:04.292 ===============
00:02:04.292 Drivers Enabled
00:02:04.292 ===============
00:02:04.292
00:02:04.292 common:
00:02:04.292
00:02:04.292 bus:
00:02:04.292 pci, vdev,
00:02:04.292 mempool:
00:02:04.292 ring,
00:02:04.292 dma:
00:02:04.292
00:02:04.292 net:
00:02:04.292
00:02:04.292 crypto:
00:02:04.292
00:02:04.292 compress:
00:02:04.292
00:02:04.292 vdpa:
00:02:04.292
00:02:04.292
00:02:04.292 Message:
00:02:04.292 =================
00:02:04.292 Content Skipped
00:02:04.292 =================
00:02:04.292
00:02:04.292 apps:
00:02:04.292 dumpcap: explicitly disabled via build config
00:02:04.292 graph: explicitly disabled via build config
00:02:04.292 pdump: explicitly disabled via build config
00:02:04.292 proc-info: explicitly disabled via build config
00:02:04.292 test-acl: explicitly disabled via build config
00:02:04.292 test-bbdev: explicitly disabled via build config
00:02:04.292 test-cmdline: explicitly disabled via build config
00:02:04.292 test-compress-perf: explicitly disabled via build config
00:02:04.292 test-crypto-perf: explicitly disabled via build config
00:02:04.292 test-dma-perf: explicitly disabled via build config
00:02:04.292 test-eventdev: explicitly disabled via build config
00:02:04.292 test-fib: explicitly disabled via build config
00:02:04.292 test-flow-perf: explicitly disabled via build config
00:02:04.292 test-gpudev: explicitly disabled via build config
00:02:04.292 test-mldev: explicitly disabled via build config
00:02:04.292 test-pipeline: explicitly disabled via build config
00:02:04.292 test-pmd: explicitly disabled via build config
00:02:04.292 test-regex: explicitly disabled via build config
00:02:04.292 test-sad: explicitly disabled via build config
00:02:04.292 test-security-perf: explicitly disabled via build config
00:02:04.292
00:02:04.292 libs:
00:02:04.292 metrics: explicitly disabled via build config
00:02:04.292 acl: explicitly disabled via build config
00:02:04.292 bbdev: explicitly disabled via build config
00:02:04.292 bitratestats: explicitly disabled via build config
00:02:04.292 bpf: explicitly disabled via build config
00:02:04.292 cfgfile: explicitly disabled via build config
00:02:04.292 distributor: explicitly disabled via build config
00:02:04.292 efd: explicitly disabled via build config
00:02:04.292 eventdev: explicitly disabled via build config
00:02:04.292 dispatcher: explicitly disabled via build config
00:02:04.292 gpudev: explicitly disabled via build config
00:02:04.292 gro: explicitly disabled via build config
00:02:04.292 gso: explicitly disabled via build config
00:02:04.292 ip_frag: explicitly disabled via build config
00:02:04.292 jobstats: explicitly disabled via build config
00:02:04.292 latencystats: explicitly disabled via build config
00:02:04.292 lpm: explicitly disabled via build config
00:02:04.292 member: explicitly disabled via build config
00:02:04.292 pcapng: explicitly disabled via build config
00:02:04.292 rawdev: explicitly disabled via build config
00:02:04.292 regexdev: explicitly disabled via build config
00:02:04.292 mldev: explicitly disabled via build config
00:02:04.292 rib: explicitly disabled via build config
00:02:04.292 sched: explicitly disabled via build config
00:02:04.292 stack: explicitly disabled via build config
00:02:04.292 ipsec: explicitly disabled via build config
00:02:04.292 pdcp: explicitly disabled via build config
00:02:04.292 fib: explicitly disabled via build config
00:02:04.292 port: explicitly disabled via build config
00:02:04.292 pdump: explicitly disabled via build config
00:02:04.292 table: explicitly disabled via build config
00:02:04.292 pipeline: explicitly disabled via build config
00:02:04.292 graph: explicitly disabled via build config
00:02:04.292 node: explicitly disabled via build config
00:02:04.292
00:02:04.292 drivers:
00:02:04.292 common/cpt: not in enabled drivers build config
00:02:04.292 common/dpaax: not in enabled drivers build config
00:02:04.292 common/iavf: not in enabled drivers build config
00:02:04.292 common/idpf: not in enabled drivers build config
00:02:04.292 common/mvep: not in enabled drivers build config
00:02:04.292 common/octeontx: not in enabled drivers build config
00:02:04.292 bus/auxiliary: not in enabled drivers build config
00:02:04.292 bus/cdx: not in enabled drivers build config
00:02:04.292 bus/dpaa: not in enabled drivers build config
00:02:04.292 bus/fslmc: not in enabled drivers build config
00:02:04.292 bus/ifpga: not in enabled drivers build config
00:02:04.292 bus/platform: not in enabled drivers build config
00:02:04.292 bus/vmbus: not in enabled drivers build config
00:02:04.292 common/cnxk: not in enabled drivers build config
00:02:04.292 common/mlx5: not in enabled drivers build config
00:02:04.292 common/nfp: not in enabled drivers build config
00:02:04.293 common/qat: not in enabled drivers build config
00:02:04.293 common/sfc_efx: not in enabled drivers build config
00:02:04.293 mempool/bucket: not in enabled drivers build config
00:02:04.293 mempool/cnxk: not in enabled drivers build config
00:02:04.293 mempool/dpaa: not in enabled drivers build config
00:02:04.293 mempool/dpaa2: not in enabled drivers build config
00:02:04.293 mempool/octeontx: not in enabled drivers build config
00:02:04.293 mempool/stack: not in enabled drivers build config
00:02:04.293 dma/cnxk: not in enabled drivers build config
00:02:04.293 dma/dpaa: not in enabled drivers build config
00:02:04.293 dma/dpaa2: not in enabled drivers build config
00:02:04.293 dma/hisilicon: not in enabled drivers build config
00:02:04.293 dma/idxd: not in enabled drivers build config
00:02:04.293 dma/ioat: not in enabled drivers build config
00:02:04.293 dma/skeleton: not in enabled drivers build config
00:02:04.293 net/af_packet: not in enabled drivers build config
00:02:04.293 net/af_xdp: not in enabled drivers build config
00:02:04.293 net/ark: not in enabled drivers build config
00:02:04.293 net/atlantic: not in enabled drivers build config
00:02:04.293 net/avp: not in enabled drivers build config
00:02:04.293 net/axgbe: not in enabled drivers build config
00:02:04.293 net/bnx2x: not in enabled drivers build config
00:02:04.293 net/bnxt: not in enabled drivers build config
00:02:04.293 net/bonding: not in enabled drivers build config
00:02:04.293 net/cnxk: not in enabled drivers build config
00:02:04.293 net/cpfl: not in enabled drivers build config
00:02:04.293 net/cxgbe: not in enabled drivers build config
00:02:04.293 net/dpaa: not in enabled drivers build config
00:02:04.293 net/dpaa2: not in enabled drivers build config
00:02:04.293 net/e1000: not in enabled drivers build config
00:02:04.293 net/ena: not in enabled drivers build config
00:02:04.293 net/enetc: not in enabled drivers build config
00:02:04.293 net/enetfec: not in enabled drivers build config
00:02:04.293 net/enic: not in enabled drivers build config
00:02:04.293 net/failsafe: not in enabled drivers build config
00:02:04.293 net/fm10k: not in enabled drivers build config
00:02:04.293 net/gve: not in enabled drivers build config
00:02:04.293 net/hinic: not in enabled drivers build config
00:02:04.293 net/hns3: not in enabled drivers build config
00:02:04.293 net/i40e: not in enabled drivers build config
00:02:04.293 net/iavf: not in enabled drivers build config
00:02:04.293 net/ice: not in enabled drivers build config
00:02:04.293 net/idpf: not in enabled drivers build config
00:02:04.293 net/igc: not in enabled drivers build config
00:02:04.293 net/ionic: not in enabled drivers build config
00:02:04.293 net/ipn3ke: not in enabled drivers build config
00:02:04.293 net/ixgbe: not in enabled drivers build config
00:02:04.293 net/mana: not in enabled drivers build config
00:02:04.293 net/memif: not in enabled drivers build config
00:02:04.293 net/mlx4: not in enabled drivers build config
00:02:04.293 net/mlx5: not in enabled drivers build config
00:02:04.293 net/mvneta: not in enabled drivers build config
00:02:04.293 net/mvpp2: not in enabled drivers build config
00:02:04.293 net/netvsc: not in enabled drivers build config
00:02:04.293 net/nfb: not in enabled drivers build config
00:02:04.293 net/nfp: not in enabled drivers build config
00:02:04.293 net/ngbe: not in enabled drivers build config
00:02:04.293 net/null: not in enabled drivers build config
00:02:04.293 net/octeontx: not in enabled drivers build config
00:02:04.293 net/octeon_ep: not in enabled drivers build config
00:02:04.293 net/pcap: not in enabled drivers build config
00:02:04.293 net/pfe: not in enabled drivers build config
00:02:04.293 net/qede: not in enabled drivers build config
00:02:04.293 net/ring: not in enabled drivers build config
00:02:04.293 net/sfc: not in enabled drivers build config
00:02:04.293 net/softnic: not in enabled drivers build config
00:02:04.293 net/tap: not in enabled drivers build config
00:02:04.293 net/thunderx: not in enabled drivers build config
00:02:04.293 net/txgbe: not in enabled drivers build config
00:02:04.293 net/vdev_netvsc: not in enabled drivers build config
00:02:04.293 net/vhost: not in enabled drivers build config
00:02:04.293 net/virtio: not in enabled drivers build config
00:02:04.293 net/vmxnet3: not in enabled drivers build config
00:02:04.293 raw/*: missing internal dependency, "rawdev"
00:02:04.293 crypto/armv8: not in enabled drivers build config
00:02:04.293 crypto/bcmfs: not in enabled drivers build config
00:02:04.293 crypto/caam_jr: not in enabled drivers build config
00:02:04.293 crypto/ccp: not in enabled drivers build config
00:02:04.293 crypto/cnxk: not in enabled drivers build config
00:02:04.293 crypto/dpaa_sec: not in enabled drivers build config
00:02:04.293 crypto/dpaa2_sec: not in enabled drivers build config
00:02:04.293 crypto/ipsec_mb: not in enabled drivers build config
00:02:04.293 crypto/mlx5: not in enabled drivers build config
00:02:04.293 crypto/mvsam: not in enabled drivers build config
00:02:04.293 crypto/nitrox: not in enabled drivers build config
00:02:04.293 crypto/null: not in enabled drivers build config
00:02:04.293 crypto/octeontx: not in enabled drivers build config
00:02:04.293 crypto/openssl: not in enabled drivers build config
00:02:04.293 crypto/scheduler: not in enabled drivers build config
00:02:04.293 crypto/uadk: not in enabled drivers build config
00:02:04.293 crypto/virtio: not in enabled drivers build config
00:02:04.293 compress/isal: not in enabled drivers build config
00:02:04.293 compress/mlx5: not in enabled drivers build config
00:02:04.293 compress/octeontx: not in enabled drivers build config
00:02:04.293 compress/zlib: not in enabled drivers build config
00:02:04.293 regex/*: missing internal dependency, "regexdev"
00:02:04.293 ml/*: missing internal dependency, "mldev"
00:02:04.293 vdpa/ifc: not in enabled drivers build config
00:02:04.293 vdpa/mlx5: not in enabled drivers build config
00:02:04.293 vdpa/nfp: not in enabled drivers build config
00:02:04.293 vdpa/sfc: not in enabled drivers build config
00:02:04.293 event/*: missing internal dependency, "eventdev"
00:02:04.293 baseband/*: missing internal dependency, "bbdev"
00:02:04.293 gpu/*: missing internal dependency, "gpudev"
00:02:04.293
00:02:04.293
00:02:04.293 Build targets in project: 85
00:02:04.293
00:02:04.293 DPDK 23.11.0
00:02:04.293
00:02:04.293 User defined options
00:02:04.293 buildtype : debug
00:02:04.293 default_library : static
00:02:04.293 libdir : lib
00:02:04.293 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:04.293 b_sanitize : address
00:02:04.293 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon
00:02:04.293 c_link_args :
00:02:04.293 cpu_instruction_set: native
00:02:04.293 disable_apps : test-eventdev,test-compress-perf,pdump,test-crypto-perf,test-pmd,test-flow-perf,test-acl,test-sad,graph,proc-info,test-bbdev,test-mldev,test-gpudev,test-fib,test-cmdline,test-security-perf,dumpcap,test-pipeline,test,test-regex,test-dma-perf
00:02:04.293 disable_libs : node,lpm,acl,pdump,cfgfile,efd,latencystats,distributor,bbdev,eventdev,port,bitratestats,pdcp,bpf,graph,member,mldev,stack,pcapng,gro,fib,table,regexdev,dispatcher,sched,ipsec,metrics,gso,jobstats,pipeline,rib,ip_frag,rawdev,gpudev
00:02:04.293 enable_docs : false
00:02:04.293 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:04.293 enable_kmods : false
00:02:04.293 tests : false
00:02:04.293
00:02:04.293 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:04.552 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:04.811 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:04.811 [2/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:04.811 [3/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:04.811 [4/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:04.811 [5/265] Linking static target lib/librte_kvargs.a
00:02:04.811 [6/265] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:04.811 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:04.811 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:04.811 [9/265] Linking static target lib/librte_log.a
00:02:04.811 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:05.069 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:05.069 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:05.069 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:05.069 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:05.327 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:05.327 [16/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:05.327 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:05.327 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:05.327 [19/265] Linking static target lib/librte_telemetry.a
00:02:05.327 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:05.585 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:05.585 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:05.585 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:05.586 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:05.586 [25/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.844 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:05.844 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:05.844 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:05.844 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:05.844 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:05.844 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:05.844 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:05.844 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:06.102 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:06.102 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:06.102 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:06.102 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:06.102 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:06.360 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:06.360 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:06.360 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:06.360 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:06.360 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:06.618 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:06.618 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:06.618 [46/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:06.618 [47/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:06.618 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:06.618 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:06.878 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:06.878 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:06.878 [52/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:06.878 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:06.878 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:06.878 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:07.137 [56/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:07.137 [57/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:07.137 [58/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.137 [59/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.137 [60/265] Linking target lib/librte_log.so.24.0
00:02:07.137 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:07.137 [62/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:07.137 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:07.137 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:07.137 [65/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:07.137 [66/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:07.396 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:07.396 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:07.396 [69/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:02:07.396 [70/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:07.396 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:07.396 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:07.396 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:07.396 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:07.396 [75/265] Linking target lib/librte_kvargs.so.24.0
00:02:07.396 [76/265] Linking target lib/librte_telemetry.so.24.0
00:02:07.396 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:07.655 [78/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:02:07.655 [79/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:02:07.655 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:07.655 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:07.655 [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:07.655 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:07.655 [84/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:07.914 [85/265] Linking static target lib/librte_eal.a
00:02:07.914 [86/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:07.914 [87/265] Linking static target lib/librte_ring.a
00:02:07.914 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:07.914 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:07.914 [90/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:08.178 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:08.178 [92/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:08.178 [93/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:08.178 [94/265] Linking static target lib/librte_rcu.a
00:02:08.178 [95/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:08.440 [96/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:08.440 [97/265] Linking static target lib/librte_mempool.a
00:02:08.440 [98/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:08.440 [99/265] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:08.440 [100/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:08.441 [101/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:08.441 [102/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:08.698 [103/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:08.698 [104/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.699 [105/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:08.699 [106/265] Linking static target lib/librte_net.a
00:02:08.699 [107/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:08.699 [108/265] Linking static target lib/librte_meter.a
00:02:08.699 [109/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:08.699 [110/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.699 [111/265] Linking static target lib/librte_mbuf.a
00:02:08.958 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:08.958 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:09.216 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:09.216 [115/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.216 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:09.216 [117/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.216 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:09.476 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:09.734 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:09.734 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:09.734 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:09.734 [123/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.734 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:09.734 [125/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:09.734 [126/265] Linking static target lib/librte_pci.a
00:02:09.993 [127/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:09.993 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:09.993 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:09.993 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:09.993 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:09.993 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:09.993 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:09.993 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:09.993 [135/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.993 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:09.993 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:10.251 [138/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.251 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:10.251 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:10.251 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:10.251 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:10.251 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:10.251 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:10.251 [145/265] Linking static target lib/librte_cmdline.a
00:02:10.510 [146/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:10.510 [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:10.510 [148/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:10.510 [149/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:10.510 [150/265] Linking static target lib/librte_timer.a
00:02:10.510 [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:10.510 [152/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:10.767 [153/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:10.767 [154/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:10.767 [155/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.767 [156/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:10.767 [157/265] Linking static target lib/librte_compressdev.a
00:02:11.026 [158/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:11.026 [159/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:11.026 [160/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:11.026 [161/265] Linking static target lib/librte_dmadev.a
00:02:11.026 [162/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:11.026 [163/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:11.026 [164/265] Linking static target lib/librte_ethdev.a
00:02:11.299 [165/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:11.299 [166/265] Linking static target lib/librte_hash.a
00:02:11.299 [167/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:11.299 [168/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:11.299 [169/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:11.299 [170/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.299 [171/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:11.581 [172/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.581 [173/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.581 [174/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:11.581 [175/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:11.581 [176/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:11.581 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:11.581 [178/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:11.841 [179/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.841 [180/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:11.841 [181/265] Linking static target lib/librte_cryptodev.a
00:02:11.841 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:11.841 [183/265] Linking static target lib/librte_power.a
00:02:12.100 [184/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:12.100 [185/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:12.100 [186/265] Compiling C object
lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:12.100 [187/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:12.100 [188/265] Linking static target lib/librte_reorder.a 00:02:12.100 [189/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:12.100 [190/265] Linking static target lib/librte_security.a 00:02:12.358 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.617 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:12.618 [193/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.618 [194/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.618 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:12.876 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:12.876 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:12.876 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:12.876 [199/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:13.136 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:13.136 [201/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:13.136 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:13.136 [203/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:13.136 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:13.136 [205/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:13.396 [206/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:13.396 [207/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:13.396 [208/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:13.396 [209/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:13.396 [210/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:13.396 [211/265] Linking static target drivers/librte_bus_vdev.a 00:02:13.396 [212/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.655 [213/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:13.655 [214/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:13.655 [215/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:13.655 [216/265] Linking static target drivers/librte_bus_pci.a 00:02:13.655 [217/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.655 [218/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:13.915 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:13.915 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:13.915 [221/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:13.915 [222/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:13.915 [223/265] Linking static target drivers/librte_mempool_ring.a 00:02:14.174 [224/265] 
Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.180 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:18.467 [226/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.467 [227/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.467 [228/265] Linking target lib/librte_eal.so.24.0 00:02:18.467 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:18.467 [230/265] Linking target lib/librte_pci.so.24.0 00:02:18.467 [231/265] Linking target lib/librte_dmadev.so.24.0 00:02:18.467 [232/265] Linking target lib/librte_meter.so.24.0 00:02:18.467 [233/265] Linking target lib/librte_ring.so.24.0 00:02:18.467 [234/265] Linking target lib/librte_timer.so.24.0 00:02:18.467 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:18.467 [236/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:18.467 [237/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:18.467 [238/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:18.467 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:18.467 [240/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:18.467 [241/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:18.467 [242/265] Linking target lib/librte_rcu.so.24.0 00:02:18.468 [243/265] Linking target lib/librte_mempool.so.24.0 00:02:18.726 [244/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:18.726 [245/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:18.726 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:18.726 [247/265] Linking target lib/librte_mbuf.so.24.0 00:02:18.726 [248/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:18.985 [249/265] Linking static target lib/librte_vhost.a 00:02:18.985 [250/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:18.985 [251/265] Linking target lib/librte_compressdev.so.24.0 00:02:18.985 [252/265] Linking target lib/librte_net.so.24.0 00:02:18.985 [253/265] Linking target lib/librte_reorder.so.24.0 00:02:18.985 [254/265] Linking target lib/librte_cryptodev.so.24.0 00:02:19.242 [255/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:19.242 [256/265] Linking target lib/librte_hash.so.24.0 00:02:19.242 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:19.242 [258/265] Linking target lib/librte_cmdline.so.24.0 00:02:19.243 [259/265] Linking target lib/librte_ethdev.so.24.0 00:02:19.243 [260/265] Linking target lib/librte_security.so.24.0 00:02:19.243 [261/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:19.514 [262/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:19.514 [263/265] Linking target lib/librte_power.so.24.0 00:02:21.445 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.445 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:21.445 INFO: autodetecting backend as ninja 00:02:21.445 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:22.382 CC lib/ut/ut.o 00:02:22.382 CC lib/log/log.o 00:02:22.382 CC lib/ut_mock/mock.o 00:02:22.382 CC lib/log/log_flags.o 00:02:22.382 CC lib/log/log_deprecated.o 00:02:22.641 LIB libspdk_ut_mock.a 00:02:22.641 LIB libspdk_log.a 00:02:22.641 LIB libspdk_ut.a 00:02:22.900 CC lib/ioat/ioat.o 00:02:22.900 CXX lib/trace_parser/trace.o 00:02:22.900 CC lib/util/base64.o 00:02:22.900 CC lib/util/bit_array.o 00:02:22.900 CC lib/dma/dma.o 00:02:22.900 CC lib/util/cpuset.o 00:02:22.900 CC lib/util/crc16.o 00:02:22.900 CC lib/util/crc32.o 00:02:22.900 CC lib/util/crc32c.o 00:02:22.900 CC lib/vfio_user/host/vfio_user_pci.o 00:02:22.900 CC lib/util/crc32_ieee.o 00:02:23.159 CC lib/vfio_user/host/vfio_user.o 00:02:23.159 CC lib/util/crc64.o 00:02:23.159 CC lib/util/dif.o 00:02:23.159 CC lib/util/fd.o 00:02:23.159 CC lib/util/file.o 00:02:23.159 CC lib/util/hexlify.o 00:02:23.159 LIB libspdk_dma.a 00:02:23.159 CC lib/util/iov.o 00:02:23.159 LIB libspdk_ioat.a 00:02:23.159 CC lib/util/math.o 00:02:23.159 CC lib/util/pipe.o 00:02:23.159 CC lib/util/strerror_tls.o 00:02:23.159 CC lib/util/string.o 00:02:23.159 CC lib/util/uuid.o 00:02:23.159 LIB libspdk_vfio_user.a 00:02:23.417 CC lib/util/fd_group.o 00:02:23.417 CC lib/util/xor.o 00:02:23.417 CC lib/util/zipf.o 00:02:23.675 LIB libspdk_util.a 00:02:23.966 CC lib/rdma/common.o 00:02:23.966 CC lib/rdma/rdma_verbs.o 00:02:23.966 CC lib/env_dpdk/env.o 00:02:23.966 CC lib/env_dpdk/pci.o 00:02:23.966 CC lib/env_dpdk/memory.o 00:02:23.966 CC lib/idxd/idxd.o 00:02:23.966 CC lib/json/json_parse.o 00:02:23.966 CC lib/conf/conf.o 00:02:23.966 CC lib/vmd/vmd.o 00:02:23.966 LIB libspdk_trace_parser.a 00:02:24.223 CC lib/vmd/led.o 00:02:24.223 CC lib/json/json_util.o 00:02:24.223 LIB libspdk_conf.a 00:02:24.223 LIB libspdk_rdma.a 00:02:24.223 CC lib/idxd/idxd_user.o 00:02:24.224 CC lib/json/json_write.o 00:02:24.224 CC lib/env_dpdk/init.o 00:02:24.224 CC lib/env_dpdk/threads.o 00:02:24.224 CC lib/env_dpdk/pci_ioat.o 00:02:24.481 CC lib/env_dpdk/pci_virtio.o 00:02:24.481 CC lib/env_dpdk/pci_vmd.o 00:02:24.481 CC lib/env_dpdk/pci_idxd.o 00:02:24.481 CC lib/env_dpdk/pci_event.o 00:02:24.481 CC lib/env_dpdk/sigbus_handler.o 00:02:24.481 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:24.481 CC lib/env_dpdk/pci_dpdk.o 00:02:24.738 LIB libspdk_idxd.a 00:02:24.738 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:24.738 LIB libspdk_json.a 00:02:24.738 LIB libspdk_vmd.a 00:02:24.997 CC lib/jsonrpc/jsonrpc_client.o 00:02:24.997 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:24.997 CC lib/jsonrpc/jsonrpc_server.o 00:02:24.997 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:25.563 LIB libspdk_jsonrpc.a 00:02:25.563 CC lib/rpc/rpc.o 00:02:25.820 LIB libspdk_env_dpdk.a 00:02:25.820 LIB libspdk_rpc.a 00:02:26.079 CC lib/notify/notify.o 00:02:26.079 CC lib/notify/notify_rpc.o 00:02:26.079 CC lib/sock/sock.o 00:02:26.079 CC lib/sock/sock_rpc.o 00:02:26.079 CC lib/trace/trace.o 00:02:26.079 CC lib/trace/trace_flags.o 00:02:26.079 CC lib/trace/trace_rpc.o 00:02:26.337 LIB libspdk_notify.a 00:02:26.337 LIB libspdk_trace.a 00:02:26.594 CC lib/thread/thread.o 00:02:26.594 CC lib/thread/iobuf.o 00:02:26.852 LIB libspdk_sock.a 00:02:26.852 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:26.852 CC lib/nvme/nvme_fabric.o 00:02:26.852 CC lib/nvme/nvme_ctrlr.o 00:02:26.852 CC lib/nvme/nvme_ns_cmd.o 00:02:26.852 CC lib/nvme/nvme_ns.o 00:02:26.852 CC lib/nvme/nvme_pcie_common.o 00:02:26.852 CC lib/nvme/nvme_qpair.o 00:02:26.852 CC lib/nvme/nvme_pcie.o 00:02:27.110 CC lib/nvme/nvme.o 
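[Editor's note] The [N/265] targets and the ninja invocation traced above come from SPDK's bundled DPDK build: meson configures dpdk/build-tmp, then ninja compiles and links the librte_* libraries before SPDK's own CC/LIB lines begin. A minimal sketch of that sequence, assuming meson defaults rather than the exact flags this job passes:

    # Sketch only: configure and build the bundled DPDK the way the log shows.
    # The --default-library flag is an illustrative assumption, not taken from this job.
    cd /home/vagrant/spdk_repo/spdk/dpdk
    meson setup build-tmp --default-library=static   # generates the [N/265] targets
    ninja -C build-tmp -j 10                         # emits the Compiling/Linking lines above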
00:02:27.675 CC lib/nvme/nvme_quirks.o 00:02:27.675 CC lib/nvme/nvme_transport.o 00:02:27.675 CC lib/nvme/nvme_discovery.o 00:02:27.675 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:27.932 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:27.932 CC lib/nvme/nvme_tcp.o 00:02:28.190 CC lib/nvme/nvme_opal.o 00:02:28.190 CC lib/nvme/nvme_io_msg.o 00:02:28.190 CC lib/nvme/nvme_poll_group.o 00:02:28.190 CC lib/nvme/nvme_zns.o 00:02:28.449 CC lib/nvme/nvme_cuse.o 00:02:28.449 CC lib/nvme/nvme_vfio_user.o 00:02:28.449 CC lib/nvme/nvme_rdma.o 00:02:28.707 LIB libspdk_thread.a 00:02:28.707 CC lib/accel/accel.o 00:02:28.707 CC lib/accel/accel_rpc.o 00:02:28.967 CC lib/init/json_config.o 00:02:28.967 CC lib/blob/blobstore.o 00:02:28.967 CC lib/blob/request.o 00:02:28.967 CC lib/virtio/virtio.o 00:02:28.967 CC lib/accel/accel_sw.o 00:02:29.225 CC lib/init/subsystem.o 00:02:29.225 CC lib/init/subsystem_rpc.o 00:02:29.225 CC lib/init/rpc.o 00:02:29.225 CC lib/blob/zeroes.o 00:02:29.482 CC lib/blob/blob_bs_dev.o 00:02:29.482 CC lib/virtio/virtio_vhost_user.o 00:02:29.482 CC lib/virtio/virtio_vfio_user.o 00:02:29.483 LIB libspdk_init.a 00:02:29.483 CC lib/virtio/virtio_pci.o 00:02:29.740 CC lib/event/app.o 00:02:29.740 CC lib/event/log_rpc.o 00:02:29.740 CC lib/event/reactor.o 00:02:29.740 CC lib/event/app_rpc.o 00:02:29.740 CC lib/event/scheduler_static.o 00:02:29.740 LIB libspdk_virtio.a 00:02:29.740 LIB libspdk_nvme.a 00:02:29.999 LIB libspdk_accel.a 00:02:29.999 LIB libspdk_event.a 00:02:30.257 CC lib/bdev/bdev.o 00:02:30.257 CC lib/bdev/bdev_rpc.o 00:02:30.257 CC lib/bdev/bdev_zone.o 00:02:30.257 CC lib/bdev/part.o 00:02:30.257 CC lib/bdev/scsi_nvme.o 00:02:32.787 LIB libspdk_blob.a 00:02:33.046 CC lib/lvol/lvol.o 00:02:33.046 CC lib/blobfs/blobfs.o 00:02:33.046 CC lib/blobfs/tree.o 00:02:33.046 LIB libspdk_bdev.a 00:02:33.304 CC lib/ftl/ftl_core.o 00:02:33.304 CC lib/ftl/ftl_init.o 00:02:33.304 CC lib/ftl/ftl_layout.o 00:02:33.304 CC lib/nbd/nbd.o 00:02:33.304 CC lib/ftl/ftl_io.o 00:02:33.304 CC lib/nvmf/ctrlr.o 00:02:33.304 CC lib/ftl/ftl_debug.o 00:02:33.304 CC lib/scsi/dev.o 00:02:33.562 CC lib/scsi/lun.o 00:02:33.562 CC lib/ftl/ftl_sb.o 00:02:33.562 CC lib/nbd/nbd_rpc.o 00:02:33.821 CC lib/scsi/port.o 00:02:33.821 CC lib/scsi/scsi.o 00:02:33.821 CC lib/ftl/ftl_l2p.o 00:02:33.821 CC lib/scsi/scsi_bdev.o 00:02:33.821 CC lib/scsi/scsi_pr.o 00:02:33.821 LIB libspdk_nbd.a 00:02:33.821 CC lib/scsi/scsi_rpc.o 00:02:33.821 CC lib/scsi/task.o 00:02:34.079 CC lib/ftl/ftl_l2p_flat.o 00:02:34.079 CC lib/nvmf/ctrlr_discovery.o 00:02:34.079 LIB libspdk_blobfs.a 00:02:34.079 CC lib/ftl/ftl_nv_cache.o 00:02:34.079 CC lib/ftl/ftl_band.o 00:02:34.079 CC lib/ftl/ftl_band_ops.o 00:02:34.079 CC lib/ftl/ftl_writer.o 00:02:34.079 LIB libspdk_lvol.a 00:02:34.337 CC lib/ftl/ftl_rq.o 00:02:34.337 CC lib/ftl/ftl_reloc.o 00:02:34.337 CC lib/ftl/ftl_l2p_cache.o 00:02:34.337 CC lib/nvmf/ctrlr_bdev.o 00:02:34.595 CC lib/nvmf/subsystem.o 00:02:34.595 CC lib/ftl/ftl_p2l.o 00:02:34.595 LIB libspdk_scsi.a 00:02:34.595 CC lib/ftl/mngt/ftl_mngt.o 00:02:34.595 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:34.595 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:34.595 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:34.854 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:34.854 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:34.854 CC lib/nvmf/nvmf.o 00:02:35.112 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:35.113 CC lib/nvmf/nvmf_rpc.o 00:02:35.113 CC lib/iscsi/conn.o 00:02:35.113 CC lib/iscsi/init_grp.o 00:02:35.113 CC lib/iscsi/iscsi.o 00:02:35.113 CC lib/iscsi/md5.o 00:02:35.371 CC 
lib/ftl/mngt/ftl_mngt_l2p.o 00:02:35.371 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:35.371 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:35.371 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:35.630 CC lib/vhost/vhost.o 00:02:35.630 CC lib/vhost/vhost_rpc.o 00:02:35.630 CC lib/vhost/vhost_scsi.o 00:02:35.630 CC lib/vhost/vhost_blk.o 00:02:35.888 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:35.888 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:36.146 CC lib/nvmf/transport.o 00:02:36.146 CC lib/ftl/utils/ftl_conf.o 00:02:36.146 CC lib/ftl/utils/ftl_md.o 00:02:36.146 CC lib/vhost/rte_vhost_user.o 00:02:36.146 CC lib/iscsi/param.o 00:02:36.146 CC lib/iscsi/portal_grp.o 00:02:36.405 CC lib/iscsi/tgt_node.o 00:02:36.405 CC lib/iscsi/iscsi_subsystem.o 00:02:36.663 CC lib/ftl/utils/ftl_mempool.o 00:02:36.664 CC lib/ftl/utils/ftl_bitmap.o 00:02:36.664 CC lib/ftl/utils/ftl_property.o 00:02:36.664 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:36.664 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:36.922 CC lib/iscsi/iscsi_rpc.o 00:02:36.922 CC lib/iscsi/task.o 00:02:36.922 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:36.922 CC lib/nvmf/tcp.o 00:02:36.922 CC lib/nvmf/rdma.o 00:02:36.922 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:36.922 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:36.922 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:36.922 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:37.180 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:37.180 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:37.180 LIB libspdk_vhost.a 00:02:37.180 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:37.180 CC lib/ftl/base/ftl_base_dev.o 00:02:37.180 CC lib/ftl/base/ftl_base_bdev.o 00:02:37.180 CC lib/ftl/ftl_trace.o 00:02:37.180 LIB libspdk_iscsi.a 00:02:37.438 LIB libspdk_ftl.a 00:02:39.993 LIB libspdk_nvmf.a 00:02:39.993 CC module/env_dpdk/env_dpdk_rpc.o 00:02:40.251 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:40.251 CC module/accel/dsa/accel_dsa.o 00:02:40.251 CC module/scheduler/gscheduler/gscheduler.o 00:02:40.251 CC module/accel/ioat/accel_ioat.o 00:02:40.251 CC module/accel/iaa/accel_iaa.o 00:02:40.251 CC module/blob/bdev/blob_bdev.o 00:02:40.251 CC module/sock/posix/posix.o 00:02:40.251 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:40.251 CC module/accel/error/accel_error.o 00:02:40.251 LIB libspdk_env_dpdk_rpc.a 00:02:40.251 CC module/accel/error/accel_error_rpc.o 00:02:40.251 LIB libspdk_scheduler_dpdk_governor.a 00:02:40.251 LIB libspdk_scheduler_gscheduler.a 00:02:40.510 CC module/accel/ioat/accel_ioat_rpc.o 00:02:40.510 CC module/accel/iaa/accel_iaa_rpc.o 00:02:40.510 CC module/accel/dsa/accel_dsa_rpc.o 00:02:40.510 LIB libspdk_scheduler_dynamic.a 00:02:40.510 LIB libspdk_accel_error.a 00:02:40.510 LIB libspdk_blob_bdev.a 00:02:40.510 LIB libspdk_accel_dsa.a 00:02:40.510 LIB libspdk_accel_iaa.a 00:02:40.510 LIB libspdk_accel_ioat.a 00:02:40.768 CC module/bdev/malloc/bdev_malloc.o 00:02:40.768 CC module/bdev/lvol/vbdev_lvol.o 00:02:40.768 CC module/bdev/delay/vbdev_delay.o 00:02:40.768 CC module/blobfs/bdev/blobfs_bdev.o 00:02:40.768 CC module/bdev/gpt/gpt.o 00:02:40.768 CC module/bdev/nvme/bdev_nvme.o 00:02:40.768 CC module/bdev/error/vbdev_error.o 00:02:40.768 CC module/bdev/null/bdev_null.o 00:02:40.768 CC module/bdev/passthru/vbdev_passthru.o 00:02:41.026 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:41.026 CC module/bdev/gpt/vbdev_gpt.o 00:02:41.026 CC module/bdev/error/vbdev_error_rpc.o 00:02:41.026 CC module/bdev/null/bdev_null_rpc.o 00:02:41.284 LIB libspdk_sock_posix.a 00:02:41.284 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:41.284 LIB 
libspdk_blobfs_bdev.a 00:02:41.284 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:41.284 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:41.284 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:41.284 CC module/bdev/nvme/nvme_rpc.o 00:02:41.284 LIB libspdk_bdev_error.a 00:02:41.284 LIB libspdk_bdev_gpt.a 00:02:41.284 LIB libspdk_bdev_null.a 00:02:41.284 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:41.284 LIB libspdk_bdev_delay.a 00:02:41.542 LIB libspdk_bdev_passthru.a 00:02:41.542 CC module/bdev/nvme/bdev_mdns_client.o 00:02:41.542 LIB libspdk_bdev_malloc.a 00:02:41.542 CC module/bdev/raid/bdev_raid.o 00:02:41.542 CC module/bdev/split/vbdev_split.o 00:02:41.542 CC module/bdev/split/vbdev_split_rpc.o 00:02:41.542 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:41.542 CC module/bdev/aio/bdev_aio.o 00:02:41.542 CC module/bdev/ftl/bdev_ftl.o 00:02:41.864 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:41.864 CC module/bdev/raid/bdev_raid_rpc.o 00:02:41.864 LIB libspdk_bdev_lvol.a 00:02:41.864 LIB libspdk_bdev_split.a 00:02:41.864 CC module/bdev/raid/bdev_raid_sb.o 00:02:41.864 CC module/bdev/raid/raid0.o 00:02:41.864 CC module/bdev/aio/bdev_aio_rpc.o 00:02:41.864 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:41.864 CC module/bdev/nvme/vbdev_opal.o 00:02:42.161 LIB libspdk_bdev_ftl.a 00:02:42.161 LIB libspdk_bdev_aio.a 00:02:42.161 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:42.161 CC module/bdev/raid/raid1.o 00:02:42.161 LIB libspdk_bdev_zone_block.a 00:02:42.161 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:42.161 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:42.161 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:42.161 CC module/bdev/iscsi/bdev_iscsi.o 00:02:42.161 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:42.161 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:42.424 CC module/bdev/raid/concat.o 00:02:42.424 LIB libspdk_bdev_iscsi.a 00:02:42.683 LIB libspdk_bdev_raid.a 00:02:42.683 LIB libspdk_bdev_virtio.a 00:02:43.249 LIB libspdk_bdev_nvme.a 00:02:43.816 CC module/event/subsystems/scheduler/scheduler.o 00:02:43.816 CC module/event/subsystems/sock/sock.o 00:02:43.816 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:43.816 CC module/event/subsystems/vmd/vmd.o 00:02:43.816 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:43.816 CC module/event/subsystems/iobuf/iobuf.o 00:02:43.816 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:43.816 LIB libspdk_event_scheduler.a 00:02:43.816 LIB libspdk_event_sock.a 00:02:43.816 LIB libspdk_event_vhost_blk.a 00:02:43.816 LIB libspdk_event_iobuf.a 00:02:43.816 LIB libspdk_event_vmd.a 00:02:44.076 CC module/event/subsystems/accel/accel.o 00:02:44.334 LIB libspdk_event_accel.a 00:02:44.593 CC module/event/subsystems/bdev/bdev.o 00:02:44.852 LIB libspdk_event_bdev.a 00:02:44.852 CC module/event/subsystems/scsi/scsi.o 00:02:44.852 CC module/event/subsystems/nbd/nbd.o 00:02:44.852 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:44.852 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:45.110 LIB libspdk_event_nbd.a 00:02:45.110 LIB libspdk_event_scsi.a 00:02:45.367 LIB libspdk_event_nvmf.a 00:02:45.367 CC module/event/subsystems/iscsi/iscsi.o 00:02:45.367 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:45.626 LIB libspdk_event_vhost_scsi.a 00:02:45.626 LIB libspdk_event_iscsi.a 00:02:45.626 TEST_HEADER include/spdk/accel.h 00:02:45.626 TEST_HEADER include/spdk/accel_module.h 00:02:45.626 TEST_HEADER include/spdk/assert.h 00:02:45.626 TEST_HEADER include/spdk/barrier.h 00:02:45.626 TEST_HEADER include/spdk/base64.h 00:02:45.885 TEST_HEADER 
include/spdk/bdev.h 00:02:45.885 CXX app/trace/trace.o 00:02:45.885 TEST_HEADER include/spdk/bdev_module.h 00:02:45.885 TEST_HEADER include/spdk/bdev_zone.h 00:02:45.885 TEST_HEADER include/spdk/bit_array.h 00:02:45.885 TEST_HEADER include/spdk/bit_pool.h 00:02:45.885 TEST_HEADER include/spdk/blob.h 00:02:45.885 TEST_HEADER include/spdk/blob_bdev.h 00:02:45.885 TEST_HEADER include/spdk/blobfs.h 00:02:45.885 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:45.885 TEST_HEADER include/spdk/conf.h 00:02:45.885 TEST_HEADER include/spdk/config.h 00:02:45.885 TEST_HEADER include/spdk/cpuset.h 00:02:45.885 TEST_HEADER include/spdk/crc16.h 00:02:45.885 TEST_HEADER include/spdk/crc32.h 00:02:45.885 CC test/event/event_perf/event_perf.o 00:02:45.885 CC examples/accel/perf/accel_perf.o 00:02:45.885 TEST_HEADER include/spdk/crc64.h 00:02:45.885 TEST_HEADER include/spdk/dif.h 00:02:45.885 TEST_HEADER include/spdk/dma.h 00:02:45.885 TEST_HEADER include/spdk/endian.h 00:02:45.885 TEST_HEADER include/spdk/env.h 00:02:45.885 TEST_HEADER include/spdk/env_dpdk.h 00:02:45.885 TEST_HEADER include/spdk/event.h 00:02:45.885 TEST_HEADER include/spdk/fd.h 00:02:45.885 TEST_HEADER include/spdk/fd_group.h 00:02:45.885 TEST_HEADER include/spdk/file.h 00:02:45.885 TEST_HEADER include/spdk/ftl.h 00:02:45.885 TEST_HEADER include/spdk/gpt_spec.h 00:02:45.885 CC test/app/bdev_svc/bdev_svc.o 00:02:45.885 TEST_HEADER include/spdk/hexlify.h 00:02:45.885 TEST_HEADER include/spdk/histogram_data.h 00:02:45.885 TEST_HEADER include/spdk/idxd.h 00:02:45.885 CC test/bdev/bdevio/bdevio.o 00:02:45.885 TEST_HEADER include/spdk/idxd_spec.h 00:02:45.885 CC test/blobfs/mkfs/mkfs.o 00:02:45.885 CC test/env/mem_callbacks/mem_callbacks.o 00:02:45.885 TEST_HEADER include/spdk/init.h 00:02:45.885 TEST_HEADER include/spdk/ioat.h 00:02:45.885 TEST_HEADER include/spdk/ioat_spec.h 00:02:45.885 CC test/dma/test_dma/test_dma.o 00:02:45.885 TEST_HEADER include/spdk/iscsi_spec.h 00:02:45.885 CC test/accel/dif/dif.o 00:02:45.885 TEST_HEADER include/spdk/json.h 00:02:45.885 TEST_HEADER include/spdk/jsonrpc.h 00:02:45.885 TEST_HEADER include/spdk/likely.h 00:02:45.885 TEST_HEADER include/spdk/log.h 00:02:45.885 TEST_HEADER include/spdk/lvol.h 00:02:45.885 TEST_HEADER include/spdk/memory.h 00:02:45.885 TEST_HEADER include/spdk/mmio.h 00:02:45.885 TEST_HEADER include/spdk/nbd.h 00:02:45.885 TEST_HEADER include/spdk/notify.h 00:02:45.885 TEST_HEADER include/spdk/nvme.h 00:02:45.885 TEST_HEADER include/spdk/nvme_intel.h 00:02:45.885 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:45.885 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:45.885 TEST_HEADER include/spdk/nvme_spec.h 00:02:45.885 TEST_HEADER include/spdk/nvme_zns.h 00:02:45.885 TEST_HEADER include/spdk/nvmf.h 00:02:45.885 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:45.885 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:45.885 TEST_HEADER include/spdk/nvmf_spec.h 00:02:45.885 TEST_HEADER include/spdk/nvmf_transport.h 00:02:45.885 TEST_HEADER include/spdk/opal.h 00:02:45.885 TEST_HEADER include/spdk/opal_spec.h 00:02:45.885 TEST_HEADER include/spdk/pci_ids.h 00:02:45.885 TEST_HEADER include/spdk/pipe.h 00:02:45.885 TEST_HEADER include/spdk/queue.h 00:02:45.885 TEST_HEADER include/spdk/reduce.h 00:02:45.885 TEST_HEADER include/spdk/rpc.h 00:02:45.885 TEST_HEADER include/spdk/scheduler.h 00:02:45.885 TEST_HEADER include/spdk/scsi.h 00:02:45.885 TEST_HEADER include/spdk/scsi_spec.h 00:02:45.885 TEST_HEADER include/spdk/sock.h 00:02:45.885 TEST_HEADER include/spdk/stdinc.h 00:02:45.885 TEST_HEADER 
include/spdk/string.h 00:02:45.885 TEST_HEADER include/spdk/thread.h 00:02:45.885 TEST_HEADER include/spdk/trace.h 00:02:45.885 TEST_HEADER include/spdk/trace_parser.h 00:02:45.885 TEST_HEADER include/spdk/tree.h 00:02:45.885 TEST_HEADER include/spdk/ublk.h 00:02:45.885 TEST_HEADER include/spdk/util.h 00:02:45.885 TEST_HEADER include/spdk/uuid.h 00:02:45.885 TEST_HEADER include/spdk/version.h 00:02:45.885 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:45.885 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:45.885 TEST_HEADER include/spdk/vhost.h 00:02:45.885 TEST_HEADER include/spdk/vmd.h 00:02:45.885 TEST_HEADER include/spdk/xor.h 00:02:45.885 TEST_HEADER include/spdk/zipf.h 00:02:45.885 CXX test/cpp_headers/accel.o 00:02:46.143 LINK event_perf 00:02:46.144 LINK bdev_svc 00:02:46.144 LINK mkfs 00:02:46.144 CXX test/cpp_headers/accel_module.o 00:02:46.414 LINK spdk_trace 00:02:46.414 LINK test_dma 00:02:46.414 LINK accel_perf 00:02:46.414 LINK bdevio 00:02:46.414 LINK dif 00:02:46.414 CXX test/cpp_headers/assert.o 00:02:46.414 LINK mem_callbacks 00:02:46.681 CXX test/cpp_headers/barrier.o 00:02:46.681 CXX test/cpp_headers/base64.o 00:02:46.939 CC app/trace_record/trace_record.o 00:02:46.939 CXX test/cpp_headers/bdev.o 00:02:46.939 CC test/env/vtophys/vtophys.o 00:02:47.195 LINK spdk_trace_record 00:02:47.195 CXX test/cpp_headers/bdev_module.o 00:02:47.195 LINK vtophys 00:02:47.195 CC test/event/reactor/reactor.o 00:02:47.452 CXX test/cpp_headers/bdev_zone.o 00:02:47.452 LINK reactor 00:02:47.452 CXX test/cpp_headers/bit_array.o 00:02:47.709 CXX test/cpp_headers/bit_pool.o 00:02:47.709 CC app/nvmf_tgt/nvmf_main.o 00:02:47.967 CXX test/cpp_headers/blob.o 00:02:47.967 CC examples/bdev/hello_world/hello_bdev.o 00:02:47.967 LINK nvmf_tgt 00:02:47.967 CXX test/cpp_headers/blob_bdev.o 00:02:48.225 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:48.225 LINK hello_bdev 00:02:48.225 CXX test/cpp_headers/blobfs.o 00:02:48.225 LINK env_dpdk_post_init 00:02:48.483 CC test/event/reactor_perf/reactor_perf.o 00:02:48.483 CXX test/cpp_headers/blobfs_bdev.o 00:02:48.483 LINK reactor_perf 00:02:48.740 CXX test/cpp_headers/conf.o 00:02:48.740 CXX test/cpp_headers/config.o 00:02:48.740 CXX test/cpp_headers/cpuset.o 00:02:48.998 CXX test/cpp_headers/crc16.o 00:02:49.255 CXX test/cpp_headers/crc32.o 00:02:49.255 CXX test/cpp_headers/crc64.o 00:02:49.512 CXX test/cpp_headers/dif.o 00:02:49.512 CC test/event/app_repeat/app_repeat.o 00:02:49.770 CXX test/cpp_headers/dma.o 00:02:49.770 LINK app_repeat 00:02:50.026 CXX test/cpp_headers/endian.o 00:02:50.027 CC test/env/memory/memory_ut.o 00:02:50.027 CXX test/cpp_headers/env.o 00:02:50.284 CC test/app/histogram_perf/histogram_perf.o 00:02:50.284 CXX test/cpp_headers/env_dpdk.o 00:02:50.284 CC test/env/pci/pci_ut.o 00:02:50.284 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:50.541 LINK histogram_perf 00:02:50.541 CXX test/cpp_headers/event.o 00:02:50.541 CC examples/bdev/bdevperf/bdevperf.o 00:02:50.541 CXX test/cpp_headers/fd.o 00:02:50.798 LINK pci_ut 00:02:50.798 CC test/event/scheduler/scheduler.o 00:02:50.798 CXX test/cpp_headers/fd_group.o 00:02:50.798 LINK memory_ut 00:02:50.798 LINK nvme_fuzz 00:02:51.059 LINK scheduler 00:02:51.059 CXX test/cpp_headers/file.o 00:02:51.317 CXX test/cpp_headers/ftl.o 00:02:51.317 CC examples/blob/hello_world/hello_blob.o 00:02:51.317 CXX test/cpp_headers/gpt_spec.o 00:02:51.317 LINK bdevperf 00:02:51.574 CC examples/ioat/perf/perf.o 00:02:51.574 CXX test/cpp_headers/hexlify.o 00:02:51.574 CXX 
test/cpp_headers/histogram_data.o 00:02:51.574 CC examples/ioat/verify/verify.o 00:02:51.574 LINK hello_blob 00:02:51.832 LINK ioat_perf 00:02:51.832 CXX test/cpp_headers/idxd.o 00:02:51.832 CC examples/nvme/hello_world/hello_world.o 00:02:51.832 LINK verify 00:02:51.832 CXX test/cpp_headers/idxd_spec.o 00:02:52.089 CXX test/cpp_headers/init.o 00:02:52.089 CXX test/cpp_headers/ioat.o 00:02:52.089 LINK hello_world 00:02:52.347 CC app/iscsi_tgt/iscsi_tgt.o 00:02:52.347 CC test/app/jsoncat/jsoncat.o 00:02:52.347 CXX test/cpp_headers/ioat_spec.o 00:02:52.605 CC test/app/stub/stub.o 00:02:52.605 LINK jsoncat 00:02:52.605 CXX test/cpp_headers/iscsi_spec.o 00:02:52.605 LINK iscsi_tgt 00:02:52.605 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:52.605 LINK stub 00:02:52.605 CXX test/cpp_headers/json.o 00:02:52.863 CXX test/cpp_headers/jsonrpc.o 00:02:52.863 CC examples/sock/hello_world/hello_sock.o 00:02:53.121 CXX test/cpp_headers/likely.o 00:02:53.378 LINK hello_sock 00:02:53.378 CXX test/cpp_headers/log.o 00:02:53.636 CXX test/cpp_headers/lvol.o 00:02:53.636 CXX test/cpp_headers/memory.o 00:02:53.893 CXX test/cpp_headers/mmio.o 00:02:53.893 CC examples/vmd/lsvmd/lsvmd.o 00:02:54.459 LINK lsvmd 00:02:54.459 CXX test/cpp_headers/nbd.o 00:02:54.459 CXX test/cpp_headers/notify.o 00:02:54.459 CC examples/nvme/reconnect/reconnect.o 00:02:54.459 CXX test/cpp_headers/nvme.o 00:02:54.717 CXX test/cpp_headers/nvme_intel.o 00:02:54.717 LINK iscsi_fuzz 00:02:54.717 CC examples/vmd/led/led.o 00:02:54.975 CXX test/cpp_headers/nvme_ocssd.o 00:02:54.975 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:54.975 LINK led 00:02:55.234 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:55.493 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:55.493 CXX test/cpp_headers/nvme_spec.o 00:02:55.493 LINK reconnect 00:02:55.493 CXX test/cpp_headers/nvme_zns.o 00:02:55.750 CC test/lvol/esnap/esnap.o 00:02:55.750 CC examples/blob/cli/blobcli.o 00:02:55.750 CXX test/cpp_headers/nvmf.o 00:02:56.009 CC examples/nvmf/nvmf/nvmf.o 00:02:56.009 CXX test/cpp_headers/nvmf_cmd.o 00:02:56.009 LINK vhost_fuzz 00:02:56.266 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:56.266 CXX test/cpp_headers/nvmf_spec.o 00:02:56.266 LINK nvmf 00:02:56.525 CXX test/cpp_headers/nvmf_transport.o 00:02:56.525 CC examples/util/zipf/zipf.o 00:02:56.783 LINK blobcli 00:02:56.783 CXX test/cpp_headers/opal.o 00:02:56.783 CC app/spdk_tgt/spdk_tgt.o 00:02:56.783 CC examples/thread/thread/thread_ex.o 00:02:56.783 LINK zipf 00:02:57.041 CXX test/cpp_headers/opal_spec.o 00:02:57.041 LINK spdk_tgt 00:02:57.041 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:57.041 CC examples/idxd/perf/perf.o 00:02:57.299 LINK thread 00:02:57.299 CXX test/cpp_headers/pci_ids.o 00:02:57.299 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:57.556 CXX test/cpp_headers/pipe.o 00:02:57.556 LINK idxd_perf 00:02:57.556 LINK interrupt_tgt 00:02:57.556 CXX test/cpp_headers/queue.o 00:02:57.556 CXX test/cpp_headers/reduce.o 00:02:57.556 LINK nvme_manage 00:02:57.815 CXX test/cpp_headers/rpc.o 00:02:57.815 CC test/nvme/aer/aer.o 00:02:58.073 CXX test/cpp_headers/scheduler.o 00:02:58.073 CXX test/cpp_headers/scsi.o 00:02:58.333 CXX test/cpp_headers/scsi_spec.o 00:02:58.333 CXX test/cpp_headers/sock.o 00:02:58.333 LINK aer 00:02:58.591 CXX test/cpp_headers/stdinc.o 00:02:58.591 CC test/nvme/reset/reset.o 00:02:58.849 CXX test/cpp_headers/string.o 00:02:58.849 LINK reset 00:02:58.849 CXX test/cpp_headers/thread.o 00:02:59.108 CXX test/cpp_headers/trace.o 00:02:59.366 CXX test/cpp_headers/trace_parser.o 
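[Editor's note] The CXX test/cpp_headers/*.o entries interleaved above are a header self-containedness check: each public spdk/*.h is compiled in isolation by a C++ compiler, so a header missing an include or relying on C-only constructs fails the build. A hedged approximation of that check (the wrapper file name is invented for illustration, not the repository's actual test source):

    # Sketch: compile each public header on its own so hidden include-order
    # dependencies surface as compile errors.
    for hdr in /home/vagrant/spdk_repo/spdk/include/spdk/*.h; do
        printf '#include <spdk/%s>\n' "$(basename "$hdr")" > header_check.cpp
        g++ -I/home/vagrant/spdk_repo/spdk/include -c header_check.cpp -o /dev/null
    done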
00:02:59.366 CC examples/nvme/arbitration/arbitration.o 00:02:59.366 CXX test/cpp_headers/tree.o 00:02:59.625 CXX test/cpp_headers/ublk.o 00:02:59.625 CXX test/cpp_headers/util.o 00:02:59.625 LINK arbitration 00:02:59.883 CXX test/cpp_headers/uuid.o 00:03:00.142 CXX test/cpp_headers/version.o 00:03:00.142 CXX test/cpp_headers/vfio_user_pci.o 00:03:00.142 CC test/nvme/sgl/sgl.o 00:03:00.142 CXX test/cpp_headers/vfio_user_spec.o 00:03:00.400 CXX test/cpp_headers/vhost.o 00:03:00.400 LINK sgl 00:03:00.400 CC test/rpc_client/rpc_client_test.o 00:03:00.658 CC test/thread/poller_perf/poller_perf.o 00:03:00.658 CXX test/cpp_headers/vmd.o 00:03:00.658 CXX test/cpp_headers/xor.o 00:03:00.658 LINK rpc_client_test 00:03:00.658 LINK poller_perf 00:03:00.917 CXX test/cpp_headers/zipf.o 00:03:01.176 CC test/nvme/e2edp/nvme_dp.o 00:03:01.434 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:01.434 CC test/nvme/overhead/overhead.o 00:03:01.434 LINK nvme_dp 00:03:01.434 CC test/nvme/err_injection/err_injection.o 00:03:01.434 LINK esnap 00:03:01.692 CC examples/nvme/hotplug/hotplug.o 00:03:01.692 LINK histogram_ut 00:03:01.692 LINK err_injection 00:03:01.692 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:01.950 LINK hotplug 00:03:01.950 CC test/thread/lock/spdk_lock.o 00:03:01.950 LINK overhead 00:03:01.950 CC app/spdk_lspci/spdk_lspci.o 00:03:01.950 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:02.208 LINK spdk_lspci 00:03:02.208 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:02.467 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:02.726 LINK scsi_nvme_ut 00:03:03.292 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:03.292 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:03.292 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:03.549 CC test/nvme/startup/startup.o 00:03:03.807 LINK startup 00:03:03.807 LINK cmb_copy 00:03:03.807 LINK blob_bdev_ut 00:03:03.807 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:04.065 CC app/spdk_nvme_perf/perf.o 00:03:04.324 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:04.324 LINK gpt_ut 00:03:04.581 LINK accel_ut 00:03:04.581 LINK spdk_lock 00:03:04.847 CC test/nvme/reserve/reserve.o 00:03:04.847 CC test/nvme/simple_copy/simple_copy.o 00:03:05.130 LINK spdk_nvme_perf 00:03:05.130 LINK reserve 00:03:05.130 CC examples/nvme/abort/abort.o 00:03:05.388 LINK simple_copy 00:03:05.388 LINK vbdev_lvol_ut 00:03:05.647 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:05.647 LINK abort 00:03:05.906 LINK tree_ut 00:03:05.906 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:05.906 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:06.473 LINK part_ut 00:03:06.473 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:06.732 LINK dma_ut 00:03:06.989 CC test/unit/lib/event/app.c/app_ut.o 00:03:06.989 CC app/spdk_nvme_identify/identify.o 00:03:07.248 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:07.248 CC test/nvme/connect_stress/connect_stress.o 00:03:07.248 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:07.506 LINK connect_stress 00:03:07.506 LINK blobfs_sync_ut 00:03:07.506 LINK ioat_ut 00:03:07.765 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:08.024 LINK app_ut 00:03:08.024 LINK blobfs_async_ut 00:03:08.024 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:08.024 LINK pmr_persistence 00:03:08.282 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:08.282 LINK spdk_nvme_identify 00:03:08.540 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:08.540 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:08.540 LINK 
bdev_zone_ut 00:03:08.798 LINK blobfs_bdev_ut 00:03:08.798 LINK bdev_ut 00:03:09.058 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:09.058 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:09.316 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:09.316 LINK reactor_ut 00:03:09.575 LINK bdev_raid_sb_ut 00:03:09.575 CC test/nvme/boot_partition/boot_partition.o 00:03:09.575 LINK concat_ut 00:03:09.834 LINK boot_partition 00:03:09.834 LINK raid1_ut 00:03:09.834 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:10.093 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:10.093 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:10.093 CC app/spdk_nvme_discover/discovery_aer.o 00:03:10.352 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:10.352 LINK spdk_nvme_discover 00:03:10.352 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:10.352 LINK bdev_raid_ut 00:03:10.352 LINK init_grp_ut 00:03:10.970 CC test/unit/lib/log/log.c/log_ut.o 00:03:10.970 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:10.970 LINK jsonrpc_server_ut 00:03:10.970 LINK log_ut 00:03:11.229 LINK conn_ut 00:03:11.229 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:11.488 LINK json_util_ut 00:03:11.488 CC app/spdk_top/spdk_top.o 00:03:11.488 CC test/nvme/compliance/nvme_compliance.o 00:03:11.746 CC test/nvme/fused_ordering/fused_ordering.o 00:03:11.746 LINK bdev_ut 00:03:11.746 CC app/vhost/vhost.o 00:03:11.746 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:12.004 LINK nvme_compliance 00:03:12.004 LINK fused_ordering 00:03:12.004 LINK vhost 00:03:12.004 LINK blob_ut 00:03:12.004 LINK json_write_ut 00:03:12.263 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:12.522 LINK param_ut 00:03:12.522 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:03:12.522 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:12.522 LINK spdk_top 00:03:12.779 LINK iscsi_ut 00:03:12.779 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:13.037 LINK doorbell_aers 00:03:13.037 LINK portal_grp_ut 00:03:13.296 LINK vbdev_zone_block_ut 00:03:13.296 CC test/nvme/fdp/fdp.o 00:03:13.296 LINK tgt_node_ut 00:03:13.296 CC test/nvme/cuse/cuse.o 00:03:13.555 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:13.555 LINK json_parse_ut 00:03:13.555 LINK fdp 00:03:13.813 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:13.813 CC app/spdk_dd/spdk_dd.o 00:03:13.813 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:13.813 CC app/fio/nvme/fio_plugin.o 00:03:14.070 CC app/fio/bdev/fio_plugin.o 00:03:14.070 LINK notify_ut 00:03:14.329 LINK spdk_dd 00:03:14.329 LINK cuse 00:03:14.589 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:14.589 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:14.589 LINK spdk_bdev 00:03:14.589 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:14.851 LINK spdk_nvme 00:03:15.111 LINK dev_ut 00:03:15.375 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:15.375 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:15.633 LINK lvol_ut 00:03:15.633 LINK scsi_ut 00:03:15.892 LINK nvme_ut 00:03:15.892 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:16.150 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:16.150 LINK lun_ut 00:03:16.150 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:16.408 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:16.408 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:16.666 LINK scsi_pr_ut 00:03:16.924 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:17.183 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:17.183 LINK scsi_bdev_ut 00:03:17.749 CC 
test/unit/lib/util/base64.c/base64_ut.o 00:03:17.749 LINK posix_ut 00:03:17.749 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:17.749 LINK base64_ut 00:03:18.007 LINK sock_ut 00:03:18.007 LINK iobuf_ut 00:03:18.264 LINK bit_array_ut 00:03:18.264 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:18.264 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:03:18.264 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:18.264 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:18.264 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:18.264 LINK crc16_ut 00:03:18.522 LINK cpuset_ut 00:03:18.522 LINK crc32_ieee_ut 00:03:18.522 LINK crc32c_ut 00:03:18.522 LINK bdev_nvme_ut 00:03:18.522 CC test/unit/lib/util/dif.c/dif_ut.o 00:03:18.522 LINK crc64_ut 00:03:18.781 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:18.781 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:18.781 CC test/unit/lib/util/math.c/math_ut.o 00:03:18.781 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:18.781 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:03:18.781 LINK tcp_ut 00:03:19.039 LINK math_ut 00:03:19.039 LINK iov_ut 00:03:19.039 CC test/unit/lib/util/string.c/string_ut.o 00:03:19.039 LINK pci_event_ut 00:03:19.297 LINK thread_ut 00:03:19.297 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:19.297 LINK subsystem_ut 00:03:19.297 LINK pipe_ut 00:03:19.297 LINK string_ut 00:03:19.297 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:19.297 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:19.297 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:19.588 LINK dif_ut 00:03:19.588 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:19.588 LINK xor_ut 00:03:19.588 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:19.588 LINK nvme_ctrlr_ut 00:03:19.588 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:19.588 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:03:19.875 LINK rpc_ut 00:03:19.875 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:19.876 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:19.876 LINK idxd_user_ut 00:03:20.134 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:20.134 CC test/unit/lib/rdma/common.c/common_ut.o 00:03:20.134 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:20.393 LINK idxd_ut 00:03:20.651 LINK common_ut 00:03:20.651 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:20.651 LINK nvme_ctrlr_ocssd_cmd_ut 00:03:20.909 LINK nvme_ns_ut 00:03:20.909 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:03:21.168 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:21.168 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:21.168 LINK nvme_ctrlr_cmd_ut 00:03:21.168 LINK ftl_l2p_ut 00:03:21.426 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:03:21.684 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:21.684 LINK vhost_ut 00:03:21.684 LINK nvme_ns_ocssd_cmd_ut 00:03:21.942 LINK subsystem_ut 00:03:21.942 LINK nvme_ns_cmd_ut 00:03:21.942 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:22.201 LINK ctrlr_bdev_ut 00:03:22.201 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:22.201 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:22.460 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:22.460 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:22.460 LINK nvme_poll_group_ut 00:03:22.742 LINK ctrlr_ut 00:03:22.742 LINK ftl_band_ut 00:03:22.742 LINK ctrlr_discovery_ut 00:03:22.742 LINK nvme_pcie_ut 00:03:22.742 LINK nvme_quirks_ut 00:03:23.000 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 
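[Editor's note] Each LINK *_ut entry above produces a standalone CUnit test executable built next to its source under test/unit/lib/; the later unittest stage runs them one by one. A sketch, assuming the in-tree output path used by SPDK's make build:

    # Sketch: run one of the unit-test binaries linked above; exit status 0
    # means every registered CUnit suite in that binary passed.
    /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut
    echo "base64_ut exit: $?"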
00:03:23.001 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:03:23.001 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:03:23.001 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:23.259 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:03:23.259 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:23.259 LINK ftl_bitmap_ut 00:03:23.259 LINK nvme_qpair_ut 00:03:23.517 LINK nvmf_ut 00:03:23.517 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:23.776 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:03:24.034 LINK ftl_io_ut 00:03:24.034 LINK ftl_mempool_ut 00:03:24.034 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:03:24.292 LINK nvme_transport_ut 00:03:24.292 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:03:24.292 LINK nvme_io_msg_ut 00:03:24.550 LINK ftl_mngt_ut 00:03:24.550 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:24.550 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:24.550 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:24.808 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:03:24.808 LINK nvme_pcie_common_ut 00:03:25.066 LINK nvme_tcp_ut 00:03:25.325 LINK nvme_fabric_ut 00:03:25.325 LINK nvme_opal_ut 00:03:25.583 LINK ftl_sb_ut 00:03:25.840 LINK ftl_layout_upgrade_ut 00:03:26.407 LINK rdma_ut 00:03:26.407 LINK nvme_cuse_ut 00:03:26.665 LINK transport_ut 00:03:26.665 LINK nvme_rdma_ut 00:03:27.232 00:03:27.232 real 2m10.320s 00:03:27.232 user 10m54.493s 00:03:27.232 sys 2m39.557s 00:03:27.232 05:55:57 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:27.232 05:55:57 -- common/autotest_common.sh@10 -- $ set +x 00:03:27.232 ************************************ 00:03:27.232 END TEST unittest_build 00:03:27.232 ************************************ 00:03:27.232 05:55:57 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:27.232 05:55:57 -- nvmf/common.sh@7 -- # uname -s 00:03:27.232 05:55:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:27.232 05:55:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:27.232 05:55:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:27.232 05:55:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:27.232 05:55:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:27.232 05:55:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:27.232 05:55:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:27.232 05:55:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:27.232 05:55:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:27.232 05:55:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:27.232 05:55:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cc355ec5-2386-4352-a9aa-687d165baeef 00:03:27.232 05:55:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=cc355ec5-2386-4352-a9aa-687d165baeef 00:03:27.232 05:55:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:27.232 05:55:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:27.232 05:55:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:27.232 05:55:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:27.232 05:55:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:27.232 05:55:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:27.232 05:55:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:27.232 05:55:57 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:27.232 05:55:57 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:27.232 05:55:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:27.232 05:55:57 -- paths/export.sh@5 -- # export PATH 00:03:27.232 05:55:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:27.232 05:55:57 -- nvmf/common.sh@46 -- # : 0 00:03:27.232 05:55:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:27.232 05:55:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:27.233 05:55:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:27.233 05:55:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:27.233 05:55:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:27.233 05:55:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:27.233 05:55:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:27.233 05:55:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:27.233 05:55:57 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:27.233 05:55:57 -- spdk/autotest.sh@32 -- # uname -s 00:03:27.233 05:55:57 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:27.233 05:55:57 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:03:27.233 05:55:57 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:27.233 05:55:57 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:27.233 05:55:57 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:27.233 05:55:57 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:27.233 05:55:57 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:27.233 05:55:57 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:03:27.233 05:55:57 -- spdk/autotest.sh@48 -- # udevadm_pid=92438 00:03:27.233 05:55:57 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:03:27.233 05:55:57 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:27.233 05:55:57 -- spdk/autotest.sh@54 -- # echo 92441 00:03:27.233 05:55:57 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:27.233 05:55:57 -- spdk/autotest.sh@56 -- # echo 92442 00:03:27.233 05:55:57 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:27.233 05:55:57 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:27.233 05:55:57 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT 
SIGTERM EXIT 00:03:27.233 05:55:57 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:27.233 05:55:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:27.233 05:55:57 -- common/autotest_common.sh@10 -- # set +x 00:03:27.233 05:55:57 -- spdk/autotest.sh@70 -- # create_test_list 00:03:27.233 05:55:57 -- common/autotest_common.sh@736 -- # xtrace_disable 00:03:27.233 05:55:57 -- common/autotest_common.sh@10 -- # set +x 00:03:27.492 05:55:57 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:27.492 05:55:57 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:27.492 05:55:57 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:27.492 05:55:57 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:27.492 05:55:57 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:27.492 05:55:57 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:27.492 05:55:57 -- common/autotest_common.sh@1440 -- # uname 00:03:27.492 05:55:57 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:03:27.492 05:55:57 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:27.492 05:55:57 -- common/autotest_common.sh@1460 -- # uname 00:03:27.492 05:55:57 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:03:27.492 05:55:57 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:27.492 05:55:57 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:03:27.492 05:55:57 -- spdk/autotest.sh@83 -- # hash lcov 00:03:27.492 05:55:57 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:27.492 05:55:57 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:03:27.492 --rc lcov_branch_coverage=1 00:03:27.492 --rc lcov_function_coverage=1 00:03:27.492 --rc genhtml_branch_coverage=1 00:03:27.492 --rc genhtml_function_coverage=1 00:03:27.492 --rc genhtml_legend=1 00:03:27.492 --rc geninfo_all_blocks=1 00:03:27.492 ' 00:03:27.492 05:55:57 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:03:27.492 --rc lcov_branch_coverage=1 00:03:27.492 --rc lcov_function_coverage=1 00:03:27.492 --rc genhtml_branch_coverage=1 00:03:27.492 --rc genhtml_function_coverage=1 00:03:27.492 --rc genhtml_legend=1 00:03:27.492 --rc geninfo_all_blocks=1 00:03:27.492 ' 00:03:27.492 05:55:57 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:03:27.492 --rc lcov_branch_coverage=1 00:03:27.492 --rc lcov_function_coverage=1 00:03:27.492 --rc genhtml_branch_coverage=1 00:03:27.492 --rc genhtml_function_coverage=1 00:03:27.492 --rc genhtml_legend=1 00:03:27.492 --rc geninfo_all_blocks=1 00:03:27.492 --no-external' 00:03:27.492 05:55:57 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:03:27.492 --rc lcov_branch_coverage=1 00:03:27.492 --rc lcov_function_coverage=1 00:03:27.492 --rc genhtml_branch_coverage=1 00:03:27.492 --rc genhtml_function_coverage=1 00:03:27.492 --rc genhtml_legend=1 00:03:27.492 --rc geninfo_all_blocks=1 00:03:27.492 --no-external' 00:03:27.492 05:55:57 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:27.492 lcov: LCOV version 1.15 00:03:27.492 05:55:57 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o 
/home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:45.585 geninfo: WARNING: GCOV did not produce any data ('no functions found') for lib/ftl/upgrade/ftl_band_upgrade.gcno, ftl_p2l_upgrade.gcno and ftl_chunk_upgrade.gcno 00:04:12.164 geninfo: WARNING: GCOV did not produce any data ('no functions found', one report-and-warning pair per .gcno file) for the objects under /home/vagrant/spdk_repo/spdk/test/cpp_headers/: nvme_spec, uuid, endian, vfio_user_pci, assert, scsi_spec, nvmf_fc_spec, reduce, crc32, json, vfio_user_spec, log, histogram_data, nvmf_cmd, crc16, accel, nvmf, blob_bdev, config, rpc, blob, jsonrpc, opal_spec, blobfs_bdev, crc64, scsi, likely, idxd_spec, nvme_ocssd, vhost, ioat_spec, hexlify, bit_pool, bdev_module, bit_array, dif, event, init, nvme_zns, cpuset, opal, tree, fd_group, zipf, nvme_intel, nbd, bdev, fd, blobfs, util, scheduler, stdinc, lvol, memory, nvme, env_dpdk, idxd, bdev_zone, accel_module, notify, env, iscsi_spec, base64, trace_parser, nvmf_spec, nvme_ocssd_spec, mmio, queue, pipe, ftl, file, ublk, nvmf_transport, vmd, xor, barrier, sock, gpt_spec, trace, pci_ids, dma, thread, conf, ioat, string and version
00:04:12.165 05:56:42 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 05:56:42 -- common/autotest_common.sh@712 -- # xtrace_disable 05:56:42 -- common/autotest_common.sh@10 -- # set +x 00:04:12.165 05:56:42 -- spdk/autotest.sh@102 -- # rm -f 05:56:42 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:12.456 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:12.456 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:12.714 05:56:43 -- spdk/autotest.sh@107 -- # get_zoned_devs 05:56:43 -- common/autotest_common.sh@1654 -- # zoned_devs=() 05:56:43 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 05:56:43 -- common/autotest_common.sh@1655 -- # local nvme bdf 05:56:43 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 05:56:43 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 05:56:43 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 05:56:43 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 05:56:43 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 05:56:43 -- spdk/autotest.sh@109 -- # (( 0 > 0 ))
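The get_zoned_devs pass above walks /sys/block/nvme* and records any namespace whose queue/zoned attribute reads something other than 'none', so that the cleanup that follows never scribbles over a zoned device (here nvme0n1 reports 'none', so zero devices are flagged and the '(( 0 > 0 ))' guard falls through). A minimal standalone sketch of that scan, with illustrative names rather than SPDK's actual common/autotest_common.sh code:

    #!/usr/bin/env bash
    # Collect zoned block devices the way the trace above does.
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        [[ -d $nvme ]] || continue              # glob may match nothing
        [[ -e $nvme/queue/zoned ]] || continue  # kernel without zoned support
        if [[ $(<"$nvme/queue/zoned") != none ]]; then
            zoned_devs[${nvme##*/}]=1           # e.g. zoned_devs[nvme0n1]=1
        fi
    done
    echo "found ${#zoned_devs[@]} zoned device(s)"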
05:56:43 -- spdk/autotest.sh@121 -- # grep -v p 05:56:43 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 05:56:43 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 05:56:43 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 05:56:43 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 05:56:43 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 05:56:43 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 No valid GPT data, bailing 05:56:43 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:12.714 05:56:43 -- scripts/common.sh@393 -- # pt= 05:56:43 -- scripts/common.sh@394 -- # return 1 05:56:43 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:12.714 1+0 records in 00:04:12.714 1+0 records out 00:04:12.714 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00486406 s, 216 MB/s 00:04:12.714 05:56:43 -- spdk/autotest.sh@129 -- # sync 05:56:43 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 05:56:43 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 05:56:43 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:14.614 05:56:44 -- spdk/autotest.sh@135 -- # uname -s 05:56:44 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 05:56:44 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 05:56:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 05:56:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 05:56:44 -- common/autotest_common.sh@10 -- # set +x 00:04:14.614 ************************************ 00:04:14.614 START TEST setup.sh 00:04:14.614 ************************************ 05:56:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:14.614 * Looking for test storage... 00:04:14.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:14.614 05:56:45 -- setup/test-setup.sh@10 -- # uname -s 00:04:14.614 05:56:45 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:14.614 05:56:45 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 05:56:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 05:56:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 05:56:45 -- common/autotest_common.sh@10 -- # set +x 00:04:14.614 ************************************ 00:04:14.614 START TEST acl 00:04:14.614 ************************************ 05:56:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:14.614 * Looking for test storage... 00:04:14.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
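Before the wipe shown above, block_in_use asked spdk-gpt.py and then blkid -s PTTYPE whether /dev/nvme0n1 carries a partition table; only after both came back empty did autotest.sh zero the first MiB as a write smoke test and sync. Roughly the same guard as a standalone snippet (illustrative; the checks in scripts/common.sh are more thorough):

    dev=/dev/nvme0n1
    pt=$(blkid -s PTTYPE -o value "$dev" || true)  # empty output: no partition table
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1    # cheap "is it writable" probe
        sync
    else
        echo "refusing to touch $dev: partition table '$pt' present" >&2
    fi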
05:56:45 -- setup/acl.sh@10 -- # get_zoned_devs 05:56:45 -- common/autotest_common.sh@1654 -- # zoned_devs=() 05:56:45 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 05:56:45 -- common/autotest_common.sh@1655 -- # local nvme bdf 05:56:45 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 05:56:45 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 05:56:45 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 05:56:45 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 05:56:45 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 05:56:45 -- setup/acl.sh@12 -- # devs=() 05:56:45 -- setup/acl.sh@12 -- # declare -a devs 05:56:45 -- setup/acl.sh@13 -- # drivers=() 05:56:45 -- setup/acl.sh@13 -- # declare -A drivers 05:56:45 -- setup/acl.sh@51 -- # setup reset 05:56:45 -- setup/common.sh@9 -- # [[ reset == output ]] 05:56:45 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:15.182 05:56:45 -- setup/acl.sh@52 -- # collect_setup_devs 05:56:45 -- setup/acl.sh@16 -- # local dev driver 05:56:45 -- setup/acl.sh@15 -- # setup output status 05:56:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 05:56:45 -- setup/common.sh@9 -- # [[ output == output ]] 05:56:45 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:15.441 Hugepages 00:04:15.441 node hugesize free / total 05:56:45 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 05:56:45 -- setup/acl.sh@19 -- # continue 05:56:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.441 00:04:15.441 Type BDF Vendor Device NUMA Driver Device Block devices 05:56:45 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 05:56:45 -- setup/acl.sh@19 -- # continue 05:56:45 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.441 05:56:46 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 05:56:46 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 05:56:46 -- setup/acl.sh@20 -- # continue 05:56:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.700 05:56:46 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 05:56:46 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 05:56:46 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 05:56:46 -- setup/acl.sh@22 -- # devs+=("$dev") 05:56:46 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 05:56:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.700 05:56:46 -- setup/acl.sh@24 -- # (( 1 > 0 )) 05:56:46 -- setup/acl.sh@54 -- # run_test denied denied 05:56:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 05:56:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 05:56:46 -- common/autotest_common.sh@10 -- # set +x 00:04:15.700 ************************************ 00:04:15.700 START TEST denied 00:04:15.700 ************************************ 05:56:46 -- common/autotest_common.sh@1104 -- # denied
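The denied test that is starting drives scripts/setup.sh through the PCI_BLOCKED environment variable and greps its output for the expected 'Skipping denied controller' line. A toy version of that denylist check (illustrative only; the real setup.sh also honors PCI_ALLOWED and does the actual driver work):

    is_blocked() {                     # PCI_BLOCKED holds space-separated BDFs
        [[ " ${PCI_BLOCKED:-} " == *" $1 "* ]]
    }
    for dev in /sys/bus/pci/devices/*; do
        bdf=${dev##*/}                 # e.g. 0000:00:06.0
        if is_blocked "$bdf"; then
            echo "Skipping denied controller at $bdf"
            continue
        fi
        : # bind/unbind work would happen here
    done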
05:56:46 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 05:56:46 -- setup/acl.sh@38 -- # setup output config 05:56:46 -- setup/common.sh@9 -- # [[ output == output ]] 05:56:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 05:56:46 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:18.239 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:18.239 05:56:48 -- setup/acl.sh@40 -- # verify 0000:00:06.0 05:56:48 -- setup/acl.sh@28 -- # local dev driver 05:56:48 -- setup/acl.sh@30 -- # for dev in "$@" 05:56:48 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 05:56:48 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 05:56:48 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 05:56:48 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 05:56:48 -- setup/acl.sh@41 -- # setup reset 05:56:48 -- setup/common.sh@9 -- # [[ reset == output ]] 05:56:48 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:18.498 00:04:18.498 real 0m2.787s 00:04:18.498 user 0m0.539s 00:04:18.498 sys 0m2.311s 00:04:18.498 05:56:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 05:56:48 -- common/autotest_common.sh@10 -- # set +x 00:04:18.498 ************************************ 00:04:18.498 END TEST denied 00:04:18.498 ************************************ 00:04:18.498 05:56:48 -- setup/acl.sh@55 -- # run_test allowed allowed 05:56:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 05:56:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 05:56:48 -- common/autotest_common.sh@10 -- # set +x 00:04:18.498 ************************************ 00:04:18.498 START TEST allowed 00:04:18.498 ************************************ 05:56:49 -- common/autotest_common.sh@1104 -- # allowed 05:56:49 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 05:56:49 -- setup/acl.sh@45 -- # setup output config 05:56:49 -- setup/common.sh@9 -- # [[ output == output ]] 05:56:49 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 05:56:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:20.401 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:20.401 05:56:50 -- setup/acl.sh@47 -- # verify 05:56:50 -- setup/acl.sh@28 -- # local dev driver 05:56:50 -- setup/acl.sh@48 -- # setup reset 05:56:50 -- setup/common.sh@9 -- # [[ reset == output ]] 05:56:50 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:20.659 00:04:20.659 real 0m2.145s 00:04:20.659 user 0m0.449s 00:04:20.659 sys 0m1.702s 00:04:20.659 ************************************ 00:04:20.659 END TEST allowed 00:04:20.659 ************************************ 05:56:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 05:56:51 -- common/autotest_common.sh@10 -- # set +x 00:04:20.659 00:04:20.659 real 0m6.163s 00:04:20.659 user 0m1.565s 00:04:20.659 sys 0m4.753s 00:04:20.659 05:56:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 05:56:51 -- common/autotest_common.sh@10 -- # set +x 00:04:20.659 ************************************ 00:04:20.659 END TEST acl 00:04:20.659 ************************************
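The key line in the allowed test above is '0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic': with the controller on the PCI_ALLOWED list, setup.sh config detaches it from the kernel nvme driver and binds it to a userspace-I/O stub. The generic sysfs mechanics behind such a rebind look roughly like this (run as root; this is standard kernel plumbing, and SPDK's setup.sh wraps the same idea with many more safety checks):

    bdf=0000:00:06.0
    dev=/sys/bus/pci/devices/$bdf
    modprobe uio_pci_generic
    [[ -e $dev/driver ]] && echo "$bdf" > "$dev/driver/unbind"  # detach nvme
    echo uio_pci_generic > "$dev/driver_override"               # pin the next bind
    echo "$bdf" > /sys/bus/pci/drivers_probe                    # trigger rebind
    echo > "$dev/driver_override"                               # clear the override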
05:56:51 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 05:56:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 05:56:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 05:56:51 -- common/autotest_common.sh@10 -- # set +x 00:04:20.660 ************************************ 00:04:20.660 START TEST hugepages 00:04:20.660 ************************************ 05:56:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:20.660 * Looking for test storage... 00:04:20.660 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 05:56:51 -- setup/hugepages.sh@10 -- # nodes_sys=() 05:56:51 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 05:56:51 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 05:56:51 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 05:56:51 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 05:56:51 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 05:56:51 -- setup/common.sh@17 -- # local get=Hugepagesize 05:56:51 -- setup/common.sh@18 -- # local node= 05:56:51 -- setup/common.sh@19 -- # local var val 05:56:51 -- setup/common.sh@20 -- # local mem_f mem 05:56:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 05:56:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 05:56:51 -- setup/common.sh@25 -- # [[ -n '' ]] 05:56:51 -- setup/common.sh@28 -- # mapfile -t mem 05:56:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 05:56:51 -- setup/common.sh@31 -- # IFS=': ' 05:56:51 -- setup/common.sh@31 -- # read -r var val _ 05:56:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 2985680 kB' 'MemAvailable: 7398928 kB' 'Buffers: 35344 kB' 'Cached: 4516508 kB' 'SwapCached: 0 kB' 'Active: 994252 kB' 'Inactive: 3677152 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 130164 kB' 'Active(file): 993196 kB' 'Inactive(file): 3546988 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'AnonPages: 148876 kB' 'Mapped: 68260 kB' 'Shmem: 2600 kB' 'KReclaimable: 194368 kB' 'Slab: 259440 kB' 'SReclaimable: 194368 kB' 'SUnreclaim: 65072 kB' 'KernelStack: 4532 kB' 'PageTables: 3900 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4024336 kB' 'Committed_AS: 496228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19660 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 141164 kB' 'DirectMap2M: 4052992 kB' 'DirectMap1G: 10485760 kB' 00:04:20.919 05:56:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 05:56:51 -- setup/common.sh@32 -- # continue
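get_meminfo above slurps /proc/meminfo with mapfile and then walks it field by field under IFS=': ' until the requested key matches, which is why the trace that follows is one compare-and-continue step per key. The same lookup in a few lines (illustrative; setup/common.sh additionally supports reading a per-NUMA-node meminfo via its node= argument):

    get_meminfo() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    get_meminfo Hugepagesize   # prints 2048 (kB) on this VM, matching the trace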
00:04:20.919 05:56:51 -- setup/common.sh@31-32 -- # (the lookup then cycles IFS=': ' / read -r var val _ / [[ $var == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue for each remaining key: MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, HugePages_Total, HugePages_Free, HugePages_Rsvd and HugePages_Surp) 00:04:20.921 05:56:51 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 05:56:51 -- setup/common.sh@33 -- # echo 2048 05:56:51 -- setup/common.sh@33 -- # return 0 05:56:51 -- setup/hugepages.sh@16 -- # default_hugepages=2048 05:56:51 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 05:56:51 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 05:56:51 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 05:56:51 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 05:56:51 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 05:56:51 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 05:56:51 -- setup/hugepages.sh@207 -- # get_nodes 05:56:51 -- setup/hugepages.sh@27 -- # local node 05:56:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 05:56:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 05:56:51 -- setup/hugepages.sh@32 -- # no_nodes=1 05:56:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 05:56:51 -- setup/hugepages.sh@208 -- # clear_hp 05:56:51 -- setup/hugepages.sh@37 -- # local node hp 05:56:51 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 05:56:51 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 05:56:51 -- setup/hugepages.sh@41 -- # echo 0 05:56:51 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 05:56:51 -- setup/hugepages.sh@41 -- # echo 0
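clear_hp above writes 0 into every per-node hugepage pool (twice here, once per supported page size on node0) so the test starts from a clean slate before allocating its own pages. A standalone equivalent (needs root; this VM has a single NUMA node):

    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"   # covers both the 2048kB and 1048576kB pools
    done
    awk '/^HugePages_Total/ {print "HugePages_Total now:", $2}' /proc/meminfo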
00:04:20.921 05:56:51 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:20.921 05:56:51 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:20.921 05:56:51 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:20.921 05:56:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:20.921 05:56:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:20.921 05:56:51 -- common/autotest_common.sh@10 -- # set +x 00:04:20.921 ************************************ 00:04:20.921 START TEST default_setup 00:04:20.921 ************************************ 00:04:20.921 05:56:51 -- common/autotest_common.sh@1104 -- # default_setup 00:04:20.921 05:56:51 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:20.921 05:56:51 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:20.921 05:56:51 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:20.921 05:56:51 -- setup/hugepages.sh@51 -- # shift 00:04:20.921 05:56:51 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:20.921 05:56:51 -- setup/hugepages.sh@52 -- # local node_ids 00:04:20.921 05:56:51 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:20.921 05:56:51 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:20.921 05:56:51 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:20.921 05:56:51 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:20.921 05:56:51 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:20.921 05:56:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:20.921 05:56:51 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:20.921 05:56:51 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:20.921 05:56:51 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:20.921 05:56:51 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:20.921 05:56:51 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:20.921 05:56:51 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:20.921 05:56:51 -- setup/hugepages.sh@73 -- # return 0 00:04:20.921 05:56:51 -- setup/hugepages.sh@137 -- # setup output 00:04:20.921 05:56:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.921 05:56:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:21.489 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:21.489 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.423 05:56:52 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:22.423 05:56:52 -- setup/hugepages.sh@89 -- # local node 00:04:22.423 05:56:52 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:22.423 05:56:52 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:22.423 05:56:52 -- setup/hugepages.sh@92 -- # local surp 00:04:22.423 05:56:52 -- setup/hugepages.sh@93 -- # local resv 00:04:22.423 05:56:52 -- setup/hugepages.sh@94 -- # local anon 00:04:22.423 05:56:52 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:22.423 05:56:52 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:22.423 05:56:52 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:22.423 05:56:52 -- setup/common.sh@18 -- # local node= 00:04:22.423 05:56:52 -- setup/common.sh@19 -- # local var val 00:04:22.423 05:56:52 -- setup/common.sh@20 -- # local mem_f mem 00:04:22.423 05:56:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.423 05:56:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.423 05:56:52 -- setup/common.sh@25 -- # [[ -n 
'' ]] 00:04:22.423 05:56:52 -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.423 05:56:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.423 05:56:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.423 05:56:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.423 05:56:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5072080 kB' 'MemAvailable: 9485336 kB' 'Buffers: 35344 kB' 'Cached: 4516508 kB' 'SwapCached: 0 kB' 'Active: 994244 kB' 'Inactive: 3691060 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 144076 kB' 'Active(file): 993204 kB' 'Inactive(file): 3546984 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'AnonPages: 162384 kB' 'Mapped: 68220 kB' 'Shmem: 2596 kB' 'KReclaimable: 194372 kB' 'Slab: 258992 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 64620 kB' 'KernelStack: 4496 kB' 'PageTables: 3724 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 509020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19660 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 141164 kB' 'DirectMap2M: 4052992 kB' 'DirectMap1G: 10485760 kB' 00:04:22.423 05:56:52 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.423 05:56:52 -- setup/common.sh@32 -- # continue 00:04:22.423 05:56:52 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.423 05:56:52 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.423 05:56:52 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.423 05:56:53 -- setup/common.sh@32 -- # continue 00:04:22.423 05:56:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.423 05:56:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.423 05:56:53 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.423 05:56:53 -- setup/common.sh@32 -- # continue 00:04:22.423 05:56:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.423 05:56:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.423 05:56:53 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.423 05:56:53 -- setup/common.sh@32 -- # continue 00:04:22.423 05:56:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.423 05:56:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.423 05:56:53 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.423 05:56:53 -- setup/common.sh@32 -- # continue 00:04:22.423 05:56:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.423 05:56:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.423 05:56:53 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.423 05:56:53 -- setup/common.sh@32 -- # continue 00:04:22.423 05:56:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.423 05:56:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.423 05:56:53 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.423 05:56:53 -- setup/common.sh@32 -- # continue 00:04:22.423 05:56:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.423 05:56:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.423 05:56:53 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.423 05:56:53 -- 
setup/common.sh@32 -- # continue 00:04:22.424 05:56:53 -- setup/common.sh@31-32 -- # (the AnonHugePages lookup walks /proc/meminfo the same way as the Hugepagesize lookup above: IFS=': ' / read -r var val _ / [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue repeat for Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS and VmallocTotal, where this part of the trace leaves off, still short of the AnonHugePages match)
# read -r var val _ 00:04:22.424 05:56:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.424 05:56:53 -- setup/common.sh@32 -- # continue 00:04:22.424 05:56:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.424 05:56:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.424 05:56:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.424 05:56:53 -- setup/common.sh@32 -- # continue 00:04:22.424 05:56:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.424 05:56:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.424 05:56:53 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.424 05:56:53 -- setup/common.sh@32 -- # continue 00:04:22.424 05:56:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.424 05:56:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.424 05:56:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.424 05:56:53 -- setup/common.sh@32 -- # continue 00:04:22.424 05:56:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.424 05:56:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.424 05:56:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.424 05:56:53 -- setup/common.sh@33 -- # echo 0 00:04:22.424 05:56:53 -- setup/common.sh@33 -- # return 0 00:04:22.424 05:56:53 -- setup/hugepages.sh@97 -- # anon=0 00:04:22.424 05:56:53 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:22.424 05:56:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.424 05:56:53 -- setup/common.sh@18 -- # local node= 00:04:22.424 05:56:53 -- setup/common.sh@19 -- # local var val 00:04:22.424 05:56:53 -- setup/common.sh@20 -- # local mem_f mem 00:04:22.424 05:56:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.424 05:56:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.424 05:56:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.424 05:56:53 -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.424 05:56:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.424 05:56:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.424 05:56:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5072080 kB' 'MemAvailable: 9485336 kB' 'Buffers: 35344 kB' 'Cached: 4516508 kB' 'SwapCached: 0 kB' 'Active: 994244 kB' 'Inactive: 3691060 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 144076 kB' 'Active(file): 993204 kB' 'Inactive(file): 3546984 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'AnonPages: 162384 kB' 'Mapped: 68220 kB' 'Shmem: 2596 kB' 'KReclaimable: 194372 kB' 'Slab: 258992 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 64620 kB' 'KernelStack: 4496 kB' 'PageTables: 3724 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 509020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19660 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 141164 kB' 'DirectMap2M: 4052992 kB' 'DirectMap1G: 10485760 kB' 00:04:22.424 05:56:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.424 05:56:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
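The printf above is the raw /proc/meminfo snapshot that get_meminfo just captured; the field-by-field scan that follows is the helper pulling one value out of it. Reconstructed from this xtrace, setup/common.sh's get_meminfo behaves roughly like the minimal sketch below. Only the traced statements are confirmed; the loop construct, the herestring, and the final fall-through are assumptions.

    # Minimal sketch of get_meminfo as reconstructed from the trace above.
    shopt -s extglob                       # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=$2               # e.g. get_meminfo HugePages_Surp [node]
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, prefer the per-node sysfs meminfo file (common.sh@23-24)
        if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
        local line
        for line in "${mem[@]}"; do        # assumed loop; the trace shows only read/continue
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # skip every key until the requested one
            echo "$val"                        # e.g. prints 0 for HugePages_Surp here
            return 0
        done
    }

Called as get_meminfo HugePages_Surp against the snapshot above, it prints 0, which the caller captures as surp=0 in the trace that follows.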
00:04:22.424 05:56:53 -- setup/common.sh@32 -- # [[ <var> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] -- # continue  (xtrace scan condensed: repeats for MemTotal through HugePages_Rsvd; no match)
00:04:22.425 05:56:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.425 05:56:53 -- setup/common.sh@33 -- # echo 0
00:04:22.425 05:56:53 -- setup/common.sh@33 -- # return 0
00:04:22.425 05:56:53 -- setup/hugepages.sh@99 -- # surp=0
00:04:22.425 05:56:53 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:22.425 05:56:53 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:22.425 05:56:53 -- setup/common.sh@18 -- # local node=
00:04:22.425 05:56:53 -- setup/common.sh@19 -- # local var val
00:04:22.425 05:56:53 -- setup/common.sh@20 -- # local mem_f mem
00:04:22.425 05:56:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.425 05:56:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.425 05:56:53 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.425 05:56:53 -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.425 05:56:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.425 05:56:53 -- setup/common.sh@31 -- # IFS=': '
00:04:22.425 05:56:53 -- setup/common.sh@31 -- # read -r var val _
00:04:22.426 05:56:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5072080 kB' 'MemAvailable: 9485340 kB' 'Buffers: 35344 kB' 'Cached: 4516508 kB' 'SwapCached: 0 kB' 'Active: 994244 kB' 'Inactive: 3690984 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 143996 kB' 'Active(file): 993204 kB' 'Inactive(file): 3546988 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'AnonPages: 162300 kB' 'Mapped: 68180 kB' 'Shmem: 2596 kB' 'KReclaimable: 194372 kB' 'Slab: 258992 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 64620 kB' 'KernelStack: 4480 kB' 'PageTables: 3684 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 509020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19660 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 141164 kB' 'DirectMap2M: 4052992 kB' 'DirectMap1G: 10485760 kB'
00:04:22.426 05:56:53 -- setup/common.sh@32 -- # [[ <var> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] -- # continue  (xtrace scan condensed: repeats for MemTotal through HugePages_Free; no match)
00:04:22.687 05:56:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.687 05:56:53 -- setup/common.sh@33 -- # echo 0
00:04:22.687 05:56:53 -- setup/common.sh@33 -- # return 0
00:04:22.687 05:56:53 -- setup/hugepages.sh@100 -- # resv=0
00:04:22.687 05:56:53 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:22.687 nr_hugepages=1024
00:04:22.687 resv_hugepages=0
00:04:22.687 05:56:53 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:22.687 surplus_hugepages=0
00:04:22.687 05:56:53 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:22.687 anon_hugepages=0
00:04:22.687 05:56:53 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:22.687 05:56:53 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:22.687 05:56:53 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
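The two arithmetic guards at hugepages.sh@107 and @109 are the core of the verification: the expected page count must match what the kernel reports once surplus and reserved pages are accounted for. With the values just extracted (variable names per the trace; the literal 1024 is the expected count for this test), the checks reduce to:

    # Values extracted above via get_meminfo
    anon=0; surp=0; resv=0; nr_hugepages=1024
    (( 1024 == nr_hugepages + surp + resv ))   # 1024 == 1024 + 0 + 0 -> true
    (( 1024 == nr_hugepages ))                 # no surplus/reserved drift -> true

Both guards pass here, so the run proceeds to the per-node accounting below.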
00:04:22.687 05:56:53 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:22.687 05:56:53 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:22.687 05:56:53 -- setup/common.sh@18 -- # local node=
00:04:22.687 05:56:53 -- setup/common.sh@19 -- # local var val
00:04:22.687 05:56:53 -- setup/common.sh@20 -- # local mem_f mem
00:04:22.687 05:56:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.687 05:56:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.687 05:56:53 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.687 05:56:53 -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.687 05:56:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.687 05:56:53 -- setup/common.sh@31 -- # IFS=': '
00:04:22.687 05:56:53 -- setup/common.sh@31 -- # read -r var val _
00:04:22.687 05:56:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5072068 kB' 'MemAvailable: 9485328 kB' 'Buffers: 35344 kB' 'Cached: 4516508 kB' 'SwapCached: 0 kB' 'Active: 994240 kB' 'Inactive: 3690968 kB' 'Active(anon): 1036 kB' 'Inactive(anon): 143980 kB' 'Active(file): 993204 kB' 'Inactive(file): 3546988 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'AnonPages: 162272 kB' 'Mapped: 68180 kB' 'Shmem: 2596 kB' 'KReclaimable: 194372 kB' 'Slab: 258992 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 64620 kB' 'KernelStack: 4500 kB' 'PageTables: 3820 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 509020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19676 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 141164 kB' 'DirectMap2M: 4052992 kB' 'DirectMap1G: 10485760 kB'
00:04:22.687 05:56:53 -- setup/common.sh@32 -- # [[ <var> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] -- # continue  (xtrace scan condensed: repeats for MemTotal through FilePmdMapped; no match)
00:04:22.688 05:56:53 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.688 05:56:53 -- setup/common.sh@33 -- # echo 1024
00:04:22.688 05:56:53 -- setup/common.sh@33 -- # return 0
00:04:22.688 05:56:53 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:22.688 05:56:53 -- setup/hugepages.sh@112 -- # get_nodes
00:04:22.688 05:56:53 -- setup/hugepages.sh@27 -- # local node
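get_nodes (hugepages.sh@27-33, traced next) discovers the machine's NUMA nodes by globbing sysfs and records a per-node huge page total; on this single-node VM it finds only node0. A sketch under those assumptions follows; the trace only shows the already-expanded assignment nodes_sys[0]=1024, so the command substitution that produces the value is a guess:

    # Sketch of get_nodes from the traced statements (extglob assumed)
    shopt -s extglob
    nodes_sys=()
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # ${node##*node} strips the path down to the numeric node id, here 0
            nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
        done
        no_nodes=${#nodes_sys[@]}   # 1 on this machine
        (( no_nodes > 0 ))          # bail out early if no node was found
    }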
00:04:22.688 05:56:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:22.688 05:56:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:22.688 05:56:53 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:22.688 05:56:53 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:22.688 05:56:53 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:22.688 05:56:53 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:22.688 05:56:53 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:22.688 05:56:53 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:22.688 05:56:53 -- setup/common.sh@18 -- # local node=0
00:04:22.688 05:56:53 -- setup/common.sh@19 -- # local var val
00:04:22.688 05:56:53 -- setup/common.sh@20 -- # local mem_f mem
00:04:22.688 05:56:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.688 05:56:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:22.688 05:56:53 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:22.688 05:56:53 -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.688 05:56:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.688 05:56:53 -- setup/common.sh@31 -- # IFS=': '
00:04:22.689 05:56:53 -- setup/common.sh@31 -- # read -r var val _
00:04:22.689 05:56:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5072316 kB' 'MemUsed: 7170664 kB' 'SwapCached: 0 kB' 'Active: 994248 kB' 'Inactive: 3690388 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 143396 kB' 'Active(file): 993204 kB' 'Inactive(file): 3546992 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'FilePages: 4551856 kB' 'Mapped: 68220 kB' 'AnonPages: 162084 kB' 'Shmem: 2596 kB' 'KernelStack: 4464 kB' 'PageTables: 3640 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 194372 kB' 'Slab: 258944 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 64572 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
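One quick consistency check on the node0 snapshot above: per-node meminfo reports MemUsed instead of MemAvailable, and the numbers agree with the system-wide view, since 12242980 kB (MemTotal) - 5072316 kB (MemFree) = 7170664 kB (MemUsed).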
-r var val _ 00:04:22.689 05:56:53 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.689 05:56:53 -- setup/common.sh@32 -- # continue 00:04:22.689 05:56:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.689 05:56:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.689 05:56:53 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.689 05:56:53 -- setup/common.sh@32 -- # continue 00:04:22.689 05:56:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.689 05:56:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.689 05:56:53 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.689 05:56:53 -- setup/common.sh@32 -- # continue 00:04:22.689 05:56:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.689 05:56:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.689 05:56:53 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.689 05:56:53 -- setup/common.sh@32 -- # continue 00:04:22.689 05:56:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.689 05:56:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.689 05:56:53 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.689 05:56:53 -- setup/common.sh@32 -- # continue 00:04:22.689 05:56:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.689 05:56:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.689 05:56:53 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.689 05:56:53 -- setup/common.sh@32 -- # continue 00:04:22.689 05:56:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.689 05:56:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.689 05:56:53 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.689 05:56:53 -- setup/common.sh@32 -- # continue 00:04:22.689 05:56:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.689 05:56:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.689 05:56:53 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.689 05:56:53 -- setup/common.sh@32 -- # continue 00:04:22.689 05:56:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.689 05:56:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.689 05:56:53 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.689 05:56:53 -- setup/common.sh@32 -- # continue 00:04:22.689 05:56:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.689 05:56:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.689 05:56:53 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.689 05:56:53 -- setup/common.sh@32 -- # continue 00:04:22.689 05:56:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.689 05:56:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.689 05:56:53 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.689 05:56:53 -- setup/common.sh@32 -- # continue 00:04:22.689 05:56:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.689 05:56:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.689 05:56:53 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.689 05:56:53 -- setup/common.sh@32 -- # continue 00:04:22.689 05:56:53 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.689 05:56:53 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.689 05:56:53 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.689 05:56:53 -- setup/common.sh@32 -- # continue 00:04:22.689 05:56:53 
00:04:22.689 05:56:53 -- setup/common.sh@31 -- # IFS=': '
00:04:22.689 05:56:53 -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: setup/common.sh@32 checks each remaining meminfo key against HugePages_Surp and 'continue's past every non-match]
00:04:22.689 05:56:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.689 05:56:53 -- setup/common.sh@33 -- # echo 0
00:04:22.689 05:56:53 -- setup/common.sh@33 -- # return 0
00:04:22.689 05:56:53 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:22.689 05:56:53 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:22.689 05:56:53 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:22.689 05:56:53 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:22.689 node0=1024 expecting 1024
00:04:22.689 05:56:53 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:22.689 05:56:53 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:22.689 real 0m1.718s
00:04:22.689 user 0m0.375s
00:04:22.689 sys 0m1.329s
00:04:22.689 05:56:53 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:22.689 05:56:53 -- common/autotest_common.sh@10 -- # set +x
00:04:22.689 ************************************
00:04:22.689 END TEST default_setup
00:04:22.689 ************************************
00:04:22.689 05:56:53 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:22.689 05:56:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:22.689 05:56:53 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:22.689 05:56:53 -- common/autotest_common.sh@10 -- # set +x
00:04:22.690 ************************************
00:04:22.690 START TEST per_node_1G_alloc
00:04:22.690 ************************************
00:04:22.690 05:56:53 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:04:22.690 05:56:53 -- setup/hugepages.sh@143 -- # local IFS=,
00:04:22.690 05:56:53 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:22.690 05:56:53 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:22.690 05:56:53 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:22.690 05:56:53 -- setup/hugepages.sh@51 -- # shift
00:04:22.690 05:56:53 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:22.690 05:56:53 -- setup/hugepages.sh@52 -- # local node_ids
00:04:22.690 05:56:53 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:22.690 05:56:53 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:22.690 05:56:53 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:22.690 05:56:53 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:22.690 05:56:53 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:22.690 05:56:53 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:22.690 05:56:53 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:22.690 05:56:53 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:22.690 05:56:53 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:22.690 05:56:53 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:22.690 05:56:53 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:22.690 05:56:53 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:22.690 05:56:53 -- setup/hugepages.sh@73 -- # return 0
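The arithmetic behind the parameters just computed is worth spelling out: get_test_nr_hugepages receives a size in kB (1048576 kB, i.e. 1 GiB) plus a list of node IDs, and with the 2048 kB Hugepagesize reported in this log that works out to 1048576 / 2048 = 512 pages, all assigned to node 0. A minimal bash sketch of that math (function name mirrors the trace; the body is an illustration, not the shipped setup/hugepages.sh):

    #!/usr/bin/env bash
    # Sketch of the hugepage math traced above: a total size in kB is turned
    # into a page count using the 2048 kB default page size, then split
    # across the requested NUMA nodes.
    get_test_nr_hugepages() {
        local size=$1; shift
        local node_ids=("$@")                              # remaining args are node IDs
        local default_hugepages=2048                       # kB, matches Hugepagesize in the trace
        local nr_hugepages=$((size / default_hugepages))   # 1048576 / 2048 = 512
        local per_node=$((nr_hugepages / ${#node_ids[@]}))
        local node
        for node in "${node_ids[@]}"; do
            echo "node${node}=${per_node}"
        done
    }
    get_test_nr_hugepages 1048576 0    # -> node0=512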
00:04:22.690 05:56:53 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:22.690 05:56:53 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:22.690 05:56:53 -- setup/hugepages.sh@146 -- # setup output
00:04:22.690 05:56:53 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:22.690 05:56:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:22.948 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:23.207 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:23.466 05:56:54 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:04:23.466 05:56:54 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:23.466 05:56:54 -- setup/hugepages.sh@89 -- # local node
00:04:23.466 05:56:54 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:23.466 05:56:54 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:23.466 05:56:54 -- setup/hugepages.sh@92 -- # local surp
00:04:23.466 05:56:54 -- setup/hugepages.sh@93 -- # local resv
00:04:23.466 05:56:54 -- setup/hugepages.sh@94 -- # local anon
00:04:23.466 05:56:54 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:23.466 05:56:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:23.466 05:56:54 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:23.466 05:56:54 -- setup/common.sh@18 -- # local node=
00:04:23.466 05:56:54 -- setup/common.sh@19 -- # local var val
00:04:23.466 05:56:54 -- setup/common.sh@20 -- # local mem_f mem
00:04:23.466 05:56:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:23.466 05:56:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:23.466 05:56:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:23.466 05:56:54 -- setup/common.sh@28 -- # mapfile -t mem
00:04:23.466 05:56:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:23.466 05:56:54 -- setup/common.sh@31 -- # IFS=': '
00:04:23.466 05:56:54 -- setup/common.sh@31 -- # read -r var val _
00:04:23.467 05:56:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6117940 kB' 'MemAvailable: 10531204 kB' 'Buffers: 35352 kB' 'Cached: 4516512 kB' 'SwapCached: 0 kB' 'Active: 994288 kB' 'Inactive: 3690388 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 143428 kB' 'Active(file): 993236 kB' 'Inactive(file): 3546960 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 724 kB' 'Writeback: 0 kB' 'AnonPages: 162356 kB' 'Mapped: 68228 kB' 'Shmem: 2596 kB' 'KReclaimable: 194372 kB' 'Slab: 259400 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 65028 kB' 'KernelStack: 4416 kB' 'PageTables: 3528 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 509152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19676 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 141164 kB' 'DirectMap2M: 4052992 kB' 'DirectMap1G: 10485760 kB'
[xtrace elided: setup/common.sh@32 scans the snapshot above key by key for AnonHugePages, hitting 'continue' on every key from MemTotal through HardwareCorrupted]
00:04:23.730 05:56:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:23.730 05:56:54 -- setup/common.sh@33 -- # echo 0
00:04:23.730 05:56:54 -- setup/common.sh@33 -- # return 0
00:04:23.730 05:56:54 -- setup/hugepages.sh@97 -- # anon=0
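All of the IFS=': ' / read -r var val _ / continue churn compressed above is a single helper: get_meminfo reads the meminfo lines into an array, then walks them until the key in $var equals the requested one and echoes its value. A simplified, runnable stand-in for the same idea (not a verbatim copy of setup/common.sh):

    #!/usr/bin/env bash
    # Return the numeric value for one /proc/meminfo key, e.g. AnonHugePages.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Every non-matching key is skipped; this is what produced the
            # long runs of 'continue' lines in the xtrace output above.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }
    get_meminfo AnonHugePages    # prints 0 on this runner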
00:04:23.730 05:56:54 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:23.730 05:56:54 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:23.730 05:56:54 -- setup/common.sh@18 -- # local node=
00:04:23.730 05:56:54 -- setup/common.sh@19 -- # local var val
00:04:23.730 05:56:54 -- setup/common.sh@20 -- # local mem_f mem
00:04:23.730 05:56:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:23.730 05:56:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:23.730 05:56:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:23.730 05:56:54 -- setup/common.sh@28 -- # mapfile -t mem
00:04:23.730 05:56:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:23.730 05:56:54 -- setup/common.sh@16 -- # printf '%s\n' ... [second /proc/meminfo snapshot; identical to the first except AnonPages: 162096 kB]
[xtrace elided: setup/common.sh@32 scans the snapshot key by key for HugePages_Surp, hitting 'continue' on every key from MemTotal through HugePages_Rsvd]
00:04:23.732 05:56:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:23.732 05:56:54 -- setup/common.sh@33 -- # echo 0
00:04:23.732 05:56:54 -- setup/common.sh@33 -- # return 0
00:04:23.732 05:56:54 -- setup/hugepages.sh@99 -- # surp=0
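The surplus count just read, and the reserved count read next, both come straight from /proc/meminfo, so the harness's answers can be spot-checked outside the test with a one-liner (Linux; expected values here reflect this run):

    # Print all hugepage-related counters at once:
    grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb):' /proc/meminfo
    # On this run: Total 512, Free 512, Rsvd 0, Surp 0, Hugepagesize 2048 kB, Hugetlb 1048576 kB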
00:04:23.732 05:56:54 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:23.732 05:56:54 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:23.732 05:56:54 -- setup/common.sh@18 -- # local node=
00:04:23.732 05:56:54 -- setup/common.sh@19 -- # local var val
00:04:23.732 05:56:54 -- setup/common.sh@20 -- # local mem_f mem
00:04:23.732 05:56:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:23.732 05:56:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:23.732 05:56:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:23.732 05:56:54 -- setup/common.sh@28 -- # mapfile -t mem
00:04:23.732 05:56:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:23.732 05:56:54 -- setup/common.sh@16 -- # printf '%s\n' ... [third /proc/meminfo snapshot; identical to the first, AnonPages back to 162356 kB]
[xtrace elided: setup/common.sh@32 scans the snapshot key by key for HugePages_Rsvd, hitting 'continue' on every key from MemTotal through HugePages_Free]
00:04:23.733 05:56:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:23.733 05:56:54 -- setup/common.sh@33 -- # echo 0
00:04:23.733 05:56:54 -- setup/common.sh@33 -- # return 0
00:04:23.733 05:56:54 -- setup/hugepages.sh@100 -- # resv=0
00:04:23.733 nr_hugepages=512
00:04:23.733 05:56:54 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:23.733 resv_hugepages=0
00:04:23.733 05:56:54 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:23.733 surplus_hugepages=0
00:04:23.733 05:56:54 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:23.733 anon_hugepages=0
00:04:23.733 05:56:54 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:23.733 05:56:54 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:23.733 05:56:54 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:23.733 05:56:54 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:23.733 05:56:54 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:23.733 05:56:54 -- setup/common.sh@18 -- # local node=
00:04:23.733 05:56:54 -- setup/common.sh@19 -- # local var val
00:04:23.733 05:56:54 -- setup/common.sh@20 -- # local mem_f mem
00:04:23.733 05:56:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:23.733 05:56:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:23.733 05:56:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:23.733 05:56:54 -- setup/common.sh@28 -- # mapfile -t mem
00:04:23.733 05:56:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:23.733 05:56:54 -- setup/common.sh@16 -- # printf '%s\n' ... [fourth /proc/meminfo snapshot; differs from the first in MemFree: 6118204 kB, MemAvailable: 10531468 kB, AnonPages: 162096 kB, KernelStack: 4484 kB, PageTables: 3788 kB, VmallocUsed: 19692 kB]
[xtrace elided: setup/common.sh@32 scans the snapshot key by key for HugePages_Total, hitting 'continue' on every key from MemTotal through FilePmdMapped]
00:04:23.734 05:56:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:23.734 05:56:54 -- setup/common.sh@33 -- # echo 512
00:04:23.734 05:56:54 -- setup/common.sh@33 -- # return 0
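Taken together, these lookups let verify_nr_hugepages assert one identity: HugePages_Total (512) must equal the requested nr_hugepages plus any surplus and reserved pages, which is what the (( 512 == nr_hugepages + surp + resv )) check in the trace expands to. A compact, self-contained sketch of that logic (illustrative; the shipped hugepages.sh also tracks anonymous hugepages and per-node counts):

    #!/usr/bin/env bash
    # Condensed form of the identity the trace walks through by hand.
    verify_nr_hugepages() {
        local expected=$1 total surp resv
        total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
        surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
        resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
        # Expected count plus surplus and reserved must account for the total.
        (( total == expected + surp + resv )) || return 1
        echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp"
    }
    verify_nr_hugepages 512 && echo OK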
-- # continue 00:04:23.734 05:56:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.734 05:56:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.734 05:56:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.734 05:56:54 -- setup/common.sh@32 -- # continue 00:04:23.734 05:56:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.734 05:56:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.734 05:56:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.734 05:56:54 -- setup/common.sh@32 -- # continue 00:04:23.734 05:56:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.734 05:56:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.734 05:56:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.734 05:56:54 -- setup/common.sh@32 -- # continue 00:04:23.734 05:56:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.734 05:56:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.734 05:56:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.734 05:56:54 -- setup/common.sh@32 -- # continue 00:04:23.734 05:56:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.734 05:56:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.734 05:56:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.734 05:56:54 -- setup/common.sh@32 -- # continue 00:04:23.734 05:56:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.734 05:56:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.734 05:56:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.734 05:56:54 -- setup/common.sh@32 -- # continue 00:04:23.734 05:56:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.734 05:56:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.734 05:56:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.734 05:56:54 -- setup/common.sh@33 -- # echo 512 00:04:23.734 05:56:54 -- setup/common.sh@33 -- # return 0 00:04:23.734 05:56:54 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:23.734 05:56:54 -- setup/hugepages.sh@112 -- # get_nodes 00:04:23.734 05:56:54 -- setup/hugepages.sh@27 -- # local node 00:04:23.734 05:56:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.734 05:56:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:23.734 05:56:54 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:23.734 05:56:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:23.734 05:56:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:23.734 05:56:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:23.734 05:56:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:23.734 05:56:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.734 05:56:54 -- setup/common.sh@18 -- # local node=0 00:04:23.734 05:56:54 -- setup/common.sh@19 -- # local var val 00:04:23.734 05:56:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:23.734 05:56:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.734 05:56:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:23.734 05:56:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:23.734 05:56:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.734 05:56:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.734 
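For reference, the scan being traced here boils down to the following minimal Bash sketch, reconstructed from the setup/common.sh trace lines above (simplified; not the verbatim SPDK source):

    #!/usr/bin/env bash
    shopt -s extglob  # for the +([0-9]) pattern visible in the trace

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ mem mem_f=/proc/meminfo
        # When a node is given, prefer the per-NUMA-node view from sysfs
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node N " prefix; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # skip until the requested field
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total 0  # prints 512 on the node0 traced here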
00:04:23.734 05:56:54 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:23.734 05:56:54 -- setup/common.sh@18 -- # local node=0
00:04:23.734 05:56:54 -- setup/common.sh@19 -- # local var val
00:04:23.734 05:56:54 -- setup/common.sh@20 -- # local mem_f mem
00:04:23.734 05:56:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:23.734 05:56:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:23.734 05:56:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:23.734 05:56:54 -- setup/common.sh@28 -- # mapfile -t mem
00:04:23.734 05:56:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:23.734 05:56:54 -- setup/common.sh@31 -- # IFS=': '
00:04:23.734 05:56:54 -- setup/common.sh@31 -- # read -r var val _
00:04:23.734 05:56:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6117960 kB' 'MemUsed: 6125020 kB' 'SwapCached: 0 kB' 'Active: 994288 kB' 'Inactive: 3690636 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 143676 kB' 'Active(file): 993236 kB' 'Inactive(file): 3546960 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 724 kB' 'Writeback: 0 kB' 'FilePages: 4551864 kB' 'Mapped: 68228 kB' 'AnonPages: 162344 kB' 'Shmem: 2596 kB' 'KernelStack: 4552 kB' 'PageTables: 3788 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 194372 kB' 'Slab: 259400 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 65028 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:23.734 [xtrace condensed: setup/common.sh@31-32 read loop skips node0 fields MemTotal through HugePages_Free until HugePages_Surp matches]
00:04:23.735 05:56:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:23.735 05:56:54 -- setup/common.sh@33 -- # echo 0
00:04:23.735 05:56:54 -- setup/common.sh@33 -- # return 0
00:04:23.735 05:56:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:23.735 05:56:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:23.735 05:56:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:23.735 05:56:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:23.735 node0=512 expecting 512
00:04:23.735 05:56:54 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:23.735 05:56:54 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:23.735 
00:04:23.736 real 0m1.024s
00:04:23.736 user 0m0.326s
00:04:23.736 sys 0m0.744s
00:04:23.736 05:56:54 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:23.736 05:56:54 -- common/autotest_common.sh@10 -- # set +x
00:04:23.736 ************************************
00:04:23.736 END TEST per_node_1G_alloc
00:04:23.736 ************************************
00:04:23.736 05:56:54 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:23.736 05:56:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:23.736 05:56:54 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:23.736 05:56:54 -- common/autotest_common.sh@10 -- # set +x
00:04:23.736 ************************************
00:04:23.736 START TEST even_2G_alloc
00:04:23.736 ************************************
00:04:23.736 05:56:54 -- common/autotest_common.sh@1104 -- # even_2G_alloc
00:04:23.736 05:56:54 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:23.736 05:56:54 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:23.736 05:56:54 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:23.736 05:56:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:23.736 05:56:54 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:23.736 05:56:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:23.736 05:56:54 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:23.736 05:56:54 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:23.736 05:56:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:23.736 05:56:54 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:23.736 05:56:54 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:23.736 05:56:54 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:23.736 05:56:54 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:23.736 05:56:54 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:23.736 05:56:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:23.736 05:56:54 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:23.736 05:56:54 -- setup/hugepages.sh@83 -- # : 0
00:04:23.736 05:56:54 -- setup/hugepages.sh@84 -- # : 0
00:04:23.736 05:56:54 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:23.736 05:56:54 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:23.736 05:56:54 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:23.736 05:56:54 -- setup/hugepages.sh@153 -- # setup output
00:04:23.736 05:56:54 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:23.736 05:56:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:24.305 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:24.305 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:25.245 05:56:55 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:25.245 05:56:55 -- setup/hugepages.sh@89 -- # local node
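The nr_hugepages=1024 traced above is just the requested size divided by the default hugepage size: 2097152 kB / 2048 kB = 1024 pages, which HUGE_EVEN_ALLOC=yes then spreads across NUMA nodes (trivially all on node0 here, since only one node exists). A hedged sketch of that bookkeeping, simplified from the get_test_nr_hugepages trace (variable names follow the trace; the even-split loop is an illustration, not the verbatim source):

    # Sketch: how the test derives and spreads its hugepage budget.
    size=2097152             # requested total, in kB
    default_hugepages=2048   # Hugepagesize from /proc/meminfo, in kB
    nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024

    _no_nodes=1              # nodes found under /sys/devices/system/node/
    declare -a nodes_test
    # Even share per node (all 1024 on node0 when _no_nodes=1)
    for (( node = 0; node < _no_nodes; node++ )); do
        nodes_test[node]=$(( nr_hugepages / _no_nodes ))
    done
    echo "node0=${nodes_test[0]}"   # -> node0=1024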
00:04:25.245 05:56:55 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:25.245 05:56:55 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:25.245 05:56:55 -- setup/hugepages.sh@92 -- # local surp
00:04:25.245 05:56:55 -- setup/hugepages.sh@93 -- # local resv
00:04:25.245 05:56:55 -- setup/hugepages.sh@94 -- # local anon
00:04:25.245 05:56:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:25.245 05:56:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:25.245 05:56:55 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:25.245 05:56:55 -- setup/common.sh@18 -- # local node=
00:04:25.245 05:56:55 -- setup/common.sh@19 -- # local var val
00:04:25.245 05:56:55 -- setup/common.sh@20 -- # local mem_f mem
00:04:25.245 05:56:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.245 05:56:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:25.245 05:56:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:25.245 05:56:55 -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.245 05:56:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.245 05:56:55 -- setup/common.sh@31 -- # IFS=': '
00:04:25.245 05:56:55 -- setup/common.sh@31 -- # read -r var val _
00:04:25.245 05:56:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5075684 kB' 'MemAvailable: 9488956 kB' 'Buffers: 35352 kB' 'Cached: 4516516 kB' 'SwapCached: 0 kB' 'Active: 994304 kB' 'Inactive: 3686788 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 139832 kB' 'Active(file): 993248 kB' 'Inactive(file): 3546956 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 158484 kB' 'Mapped: 67712 kB' 'Shmem: 2596 kB' 'KReclaimable: 194372 kB' 'Slab: 259132 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 64760 kB' 'KernelStack: 4464 kB' 'PageTables: 3616 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19548 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 141164 kB' 'DirectMap2M: 4052992 kB' 'DirectMap1G: 10485760 kB'
00:04:25.245 [xtrace condensed: setup/common.sh@31-32 read loop skips MemTotal through HardwareCorrupted until AnonHugePages matches]
00:04:25.246 05:56:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:25.246 05:56:55 -- setup/common.sh@33 -- # echo 0
00:04:25.246 05:56:55 -- setup/common.sh@33 -- # return 0
00:04:25.246 05:56:55 -- setup/hugepages.sh@97 -- # anon=0
00:04:25.246 05:56:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:25.246 05:56:55 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:25.246 05:56:55 -- setup/common.sh@18 -- # local node=
00:04:25.246 05:56:55 -- setup/common.sh@19 -- # local var val
00:04:25.246 05:56:55 -- setup/common.sh@20 -- # local mem_f mem
00:04:25.246 05:56:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.246 05:56:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:25.246 05:56:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:25.246 05:56:55 -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.246 05:56:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.246 05:56:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5075684 kB' 'MemAvailable: 9488956 kB' 'Buffers: 35352 kB' 'Cached: 4516516 kB' 'SwapCached: 0 kB' 'Active: 994304 kB' 'Inactive: 3687060 kB' 'Active(anon): 1056 kB' 'Inactive(anon): 140104 kB' 'Active(file): 993248 kB' 'Inactive(file): 3546956 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 158668 kB' 'Mapped: 67492 kB' 'Shmem: 2596 kB' 'KReclaimable: 194372 kB' 'Slab: 259132 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 64760 kB' 'KernelStack: 4448 kB' 'PageTables: 3580 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19564 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 141164 kB' 'DirectMap2M: 4052992 kB' 'DirectMap1G: 10485760 kB'
00:04:25.246 05:56:55 -- setup/common.sh@31 -- # IFS=': '
00:04:25.246 05:56:55 -- setup/common.sh@31 -- # read -r var val _
00:04:25.246 [xtrace condensed: setup/common.sh@31-32 read loop skips MemTotal through HugePages_Free until HugePages_Surp matches]
00:04:25.247 05:56:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:25.247 05:56:55 -- setup/common.sh@33 -- # echo 0
00:04:25.247 05:56:55 -- setup/common.sh@33 -- # return 0
00:04:25.247 05:56:55 -- setup/hugepages.sh@99 -- # surp=0
00:04:25.247 05:56:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:25.247 05:56:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:25.247 05:56:55 -- setup/common.sh@18 -- # local node=
00:04:25.247 05:56:55 -- setup/common.sh@19 -- # local var val
00:04:25.247 05:56:55 -- setup/common.sh@20 -- # local mem_f mem
00:04:25.247 05:56:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.247 05:56:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:25.247 05:56:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:25.247 05:56:55 -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.247 05:56:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.247 05:56:55 -- setup/common.sh@31 -- # IFS=': '
00:04:25.247 05:56:55 -- setup/common.sh@31 -- # read -r var val _
00:04:25.247 05:56:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5075936 kB' 'MemAvailable: 9489208 kB' 'Buffers: 35352 kB' 'Cached: 4516516 kB' 'SwapCached: 0 kB' 'Active: 994292 kB' 'Inactive: 3686572 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 139616 kB' 'Active(file): 993248 kB' 'Inactive(file): 3546956 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 158188 kB' 'Mapped: 67440 kB' 'Shmem: 2596 kB' 'KReclaimable: 194372 kB' 'Slab: 259124 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 64752 kB' 'KernelStack: 4384 kB' 'PageTables: 3408 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19580 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 141164 kB' 'DirectMap2M: 4052992 kB' 'DirectMap1G: 10485760 kB'
00:04:25.248 [xtrace condensed: setup/common.sh@31-32 read loop skips MemTotal through HugePages_Free until HugePages_Rsvd matches]
00:04:25.249 05:56:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:25.249 05:56:55 -- setup/common.sh@33 -- # echo 0
00:04:25.249 05:56:55 -- setup/common.sh@33 -- # return 0
00:04:25.249 05:56:55 -- setup/hugepages.sh@100 -- # resv=0
00:04:25.249 05:56:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:25.249 nr_hugepages=1024
00:04:25.249 resv_hugepages=0
00:04:25.249 05:56:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:25.249 surplus_hugepages=0
00:04:25.249 05:56:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:25.249 anon_hugepages=0
00:04:25.249 05:56:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:25.249 05:56:55 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:25.249 05:56:55 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:25.249 05:56:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:25.249 05:56:55 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:25.249 05:56:55 -- setup/common.sh@18 -- # local node=
00:04:25.249 05:56:55 -- setup/common.sh@19 -- # local var val
00:04:25.249 05:56:55 -- setup/common.sh@20 -- # local mem_f mem
00:04:25.249 05:56:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.249 05:56:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:25.249 05:56:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:25.249 05:56:55 -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.249 05:56:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.249 05:56:55 -- setup/common.sh@31 -- # IFS=': '
00:04:25.249 05:56:55 -- setup/common.sh@31 -- # read -r var val _
00:04:25.249 05:56:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5076692 kB' 'MemAvailable: 9489964 kB' 'Buffers: 35352 kB' 'Cached: 4516516 kB' 'SwapCached: 0 kB' 'Active: 994292 kB' 'Inactive: 3686804 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 139848 kB' 'Active(file): 993248 kB' 'Inactive(file): 3546956 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 158420 kB' 'Mapped: 67440 kB' 'Shmem: 2596 kB' 'KReclaimable: 194372 kB' 'Slab: 259124 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 64752 kB' 'KernelStack: 4436 kB' 'PageTables: 3368 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19580 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 141164 kB' 'DirectMap2M: 4052992 kB' 'DirectMap1G: 10485760 kB'
[... xtrace then walks the snapshot key by key ([[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] -> continue), MemTotal through FilePmdMapped ...]
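(The block above, and every similar block in this log, is bash xtrace unrolling one small loop in setup/common.sh. As a readability aid, here is a condensed sketch of the pattern being traced; it is a simplification for illustration, not the verbatim helper, which snapshots the file with mapfile first:

  # Sketch: look up one key in a meminfo-style file and print its value.
  get_meminfo_sketch() {
      local get=$1 node=$2
      local mem_f=/proc/meminfo line var val _
      # a per-node lookup reads the sysfs copy instead of /proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS= read -r line; do
          line=${line#Node [0-9] }          # sysfs lines carry a 'Node N ' prefix; strip it
          IFS=': ' read -r var val _ <<<"$line"
          [[ $var == "$get" ]] || continue  # each miss is one 'continue' line in the trace
          echo "$val"                       # IFS=': ' already split off any ' kB' unit
          return 0
      done <"$mem_f"
      return 1
  }

Usage mirrors the trace: get_meminfo_sketch HugePages_Total would print 1024 on this run.)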
00:04:25.250 05:56:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:25.250 05:56:55 -- setup/common.sh@33 -- # echo 1024
00:04:25.250 05:56:55 -- setup/common.sh@33 -- # return 0
00:04:25.250 05:56:55 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:25.250 05:56:55 -- setup/hugepages.sh@112 -- # get_nodes
00:04:25.250 05:56:55 -- setup/hugepages.sh@27 -- # local node
00:04:25.250 05:56:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:25.250 05:56:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:25.250 05:56:55 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:25.250 05:56:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:25.250 05:56:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:25.250 05:56:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:25.250 05:56:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:25.250 05:56:55 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:25.250 05:56:55 -- setup/common.sh@18 -- # local node=0
00:04:25.250 05:56:55 -- setup/common.sh@19 -- # local var val
00:04:25.250 05:56:55 -- setup/common.sh@20 -- # local mem_f mem
00:04:25.250 05:56:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.250 05:56:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:25.250 05:56:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:25.250 05:56:55 -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.250 05:56:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.250 05:56:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5076944 kB' 'MemUsed: 7166036 kB' 'SwapCached: 0 kB' 'Active: 994292 kB' 'Inactive: 3686528 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 139572 kB' 'Active(file): 993248 kB' 'Inactive(file): 3546956 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 4551868 kB' 'Mapped: 67440 kB' 'AnonPages: 158144 kB' 'Shmem: 2596 kB' 'KernelStack: 4420 kB' 'PageTables: 3588 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 194372 kB' 'Slab: 259124 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 64752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:25.250 05:56:55 -- setup/common.sh@31 -- # IFS=': '
00:04:25.250 05:56:55 -- setup/common.sh@31 -- # read -r var val _
[... xtrace scans the node0 snapshot ([[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] -> continue), MemTotal through HugePages_Free ...]
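(One subtlety worth calling out in the per-node lookup above: /sys/devices/system/node/node0/meminfo lines are prefixed with the node number, and the traced mem=("${mem[@]#Node +([0-9]) }") expansion strips that prefix from every snapshot line in a single pass. A standalone illustration; extglob is required for +([0-9]), and the traced scripts evidently enable it, given the node+([0-9]) glob above:

  shopt -s extglob
  mem=('Node 0 MemTotal: 12242980 kB' 'Node 0 HugePages_Surp: 0')
  # strip the 'Node <n> ' prefix from every array element in one expansion
  mem=("${mem[@]#Node +([0-9]) }")
  printf '%s\n' "${mem[@]}"
  # -> MemTotal: 12242980 kB
  # -> HugePages_Surp: 0
)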
00:04:25.251 05:56:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:25.251 05:56:55 -- setup/common.sh@33 -- # echo 0
00:04:25.251 05:56:55 -- setup/common.sh@33 -- # return 0
00:04:25.251 05:56:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:25.251 05:56:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:25.251 05:56:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:25.251 05:56:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:25.251 node0=1024 expecting 1024
00:04:25.251 05:56:55 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:25.251 05:56:55 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:25.251
00:04:25.251 real	0m1.456s
00:04:25.251 user	0m0.346s
00:04:25.251 sys	0m1.174s
00:04:25.251 05:56:55 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:25.251 05:56:55 -- common/autotest_common.sh@10 -- # set +x
00:04:25.251 ************************************
00:04:25.251 END TEST even_2G_alloc
00:04:25.251 ************************************
00:04:25.251 05:56:55 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:25.251 05:56:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:25.251 05:56:55 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:25.251 05:56:55 -- common/autotest_common.sh@10 -- # set +x
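(even_2G_alloc passes because the accounting identity the harness keeps re-checking holds exactly here. In rough terms, as a sketch only; verify_nr_hugepages in hugepages.sh carries more bookkeeping, and check_hugepages_sketch below is a hypothetical helper built on the get_meminfo_sketch from earlier:

  # The invariant behind the hugepages.sh@107/@109/@110 checks above:
  # the expected page count must match the kernel's totals, with
  # surplus and reserved pages folded into the accounting.
  check_hugepages_sketch() {
      local expected=$1
      local total surp resv
      total=$(get_meminfo_sketch HugePages_Total)   # 1024 in this run
      surp=$(get_meminfo_sketch HugePages_Surp)     # 0
      resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0
      (( expected == total )) && (( expected == total + surp + resv ))
  }

  check_hugepages_sketch 1024 && echo 'node0=1024 expecting 1024'

With surp and resv both zero, the two comparisons collapse to the single 1024 == 1024 match seen above.)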
00:04:25.251 ************************************
00:04:25.251 START TEST odd_alloc
00:04:25.251 ************************************
00:04:25.251 05:56:55 -- common/autotest_common.sh@1104 -- # odd_alloc
00:04:25.251 05:56:55 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:25.251 05:56:55 -- setup/hugepages.sh@49 -- # local size=2098176
00:04:25.251 05:56:55 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:25.251 05:56:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:25.251 05:56:55 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:25.251 05:56:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:25.251 05:56:55 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:25.251 05:56:55 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:25.251 05:56:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:25.251 05:56:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:25.251 05:56:55 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:25.251 05:56:55 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:25.251 05:56:55 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:25.251 05:56:55 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:25.251 05:56:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:25.251 05:56:55 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:04:25.251 05:56:55 -- setup/hugepages.sh@83 -- # : 0
00:04:25.251 05:56:55 -- setup/hugepages.sh@84 -- # : 0
00:04:25.251 05:56:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:25.251 05:56:55 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:25.251 05:56:55 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:25.251 05:56:55 -- setup/hugepages.sh@160 -- # setup output
00:04:25.251 05:56:55 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:25.251 05:56:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:25.819 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:25.819 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:26.760 05:56:57 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:26.760 05:56:57 -- setup/hugepages.sh@89 -- # local node
00:04:26.760 05:56:57 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:26.760 05:56:57 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:26.760 05:56:57 -- setup/hugepages.sh@92 -- # local surp
00:04:26.760 05:56:57 -- setup/hugepages.sh@93 -- # local resv
00:04:26.760 05:56:57 -- setup/hugepages.sh@94 -- # local anon
00:04:26.760 05:56:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:26.760 05:56:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:26.760 05:56:57 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:26.760 05:56:57 -- setup/common.sh@18 -- # local node=
00:04:26.760 05:56:57 -- setup/common.sh@19 -- # local var val
00:04:26.760 05:56:57 -- setup/common.sh@20 -- # local mem_f mem
00:04:26.760 05:56:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.760 05:56:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.760 05:56:57 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.760 05:56:57 -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.760 05:56:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.760 05:56:57 -- setup/common.sh@31 -- # IFS=': '
00:04:26.760 05:56:57 -- setup/common.sh@31 -- # read -r var val _
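(The sizing math for this test: HUGEMEM=2049 is megabytes, which becomes size=2098176 kB; at the 2048 kB page size seen in the snapshots that is 1024.5 pages, and the harness settles on the deliberately odd count 1025, which is the whole point of odd_alloc. One plausible reconstruction of the rounding, as an assumption; the exact expression in hugepages.sh may differ:

  HUGEMEM=2049                         # megabytes, as exported above
  size_kb=$(( HUGEMEM * 1024 ))        # 2098176 kB, matching 'local size=2098176'
  hp_kb=2048                           # Hugepagesize from the snapshots
  # ceiling division: 2098176 / 2048 = 1024.5 rounds up to the odd count 1025
  nr_hugepages=$(( (size_kb + hp_kb - 1) / hp_kb ))
  echo "$nr_hugepages"                 # -> 1025, matching 'nr_hugepages=1025'

The snapshots below provide their own consistency check: Hugetlb: 2099200 kB is exactly 1025 * 2048 kB.)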
00:04:26.760 05:56:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5075200 kB' 'MemAvailable: 9488476 kB' 'Buffers: 35352 kB' 'Cached: 4516516 kB' 'SwapCached: 0 kB' 'Active: 994316 kB' 'Inactive: 3686640 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 139708 kB' 'Active(file): 993276 kB' 'Inactive(file): 3546932 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 158340 kB' 'Mapped: 67340 kB' 'Shmem: 2596 kB' 'KReclaimable: 194372 kB' 'Slab: 258856 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 64484 kB' 'KernelStack: 4400 kB' 'PageTables: 3428 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 498432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19596 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 141164 kB' 'DirectMap2M: 4052992 kB' 'DirectMap1G: 10485760 kB'
[... xtrace scans the snapshot ([[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] -> continue), MemTotal through HardwareCorrupted ...]
00:04:26.761 05:56:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.761 05:56:57 -- setup/common.sh@33 -- # echo 0
00:04:26.761 05:56:57 -- setup/common.sh@33 -- # return 0
00:04:26.761 05:56:57 -- setup/hugepages.sh@97 -- # anon=0
00:04:26.761 05:56:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:26.761 05:56:57 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:26.761 05:56:57 -- setup/common.sh@18 -- # local node=
00:04:26.761 05:56:57 -- setup/common.sh@19 -- # local var val
00:04:26.761 05:56:57 -- setup/common.sh@20 -- # local mem_f mem
00:04:26.761 05:56:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.761 05:56:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.761 05:56:57 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.762 05:56:57 -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.762 05:56:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.762 05:56:57 -- setup/common.sh@31 -- # IFS=': '
00:04:26.762 05:56:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5075200 kB' 'MemAvailable: 9488476 kB' 'Buffers: 35352 kB' 'Cached: 4516516 kB' 'SwapCached: 0 kB' 'Active: 994316 kB' 'Inactive: 3686580 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 139648 kB' 'Active(file): 993276 kB' 'Inactive(file): 3546932 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 158280 kB' 'Mapped: 67340 kB' 'Shmem: 2596 kB' 'KReclaimable: 194372 kB' 'Slab: 258856 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 64484 kB' 'KernelStack: 4368 kB' 'PageTables: 3348 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 498432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19596 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 141164 kB' 'DirectMap2M: 4052992 kB' 'DirectMap1G: 10485760 kB'
00:04:26.762 05:56:57 -- setup/common.sh@31 -- # read -r var val _
[... xtrace scans the snapshot ([[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] -> continue), MemTotal through HugePages_Rsvd ...]
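(For readers skimming the repeated lookups: HugePages_Surp counts surplus pages the kernel allocated beyond nr_hugepages under overcommit, and HugePages_Rsvd counts pages already promised to mappings but not yet faulted in. Both must be zero here for the strict count checks to hold. A quick way to eyeball the same fields on any Linux box:

  grep -E '^(HugePages_|Hugepagesize|Hugetlb)' /proc/meminfo
  # HugePages_Total:    1025
  # HugePages_Free:     1025
  # HugePages_Rsvd:        0
  # HugePages_Surp:        0
  # Hugepagesize:       2048 kB
  # Hugetlb:         2099200 kB
)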
00:04:26.763 05:56:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.763 05:56:57 -- setup/common.sh@33 -- # echo 0
00:04:26.763 05:56:57 -- setup/common.sh@33 -- # return 0
00:04:26.763 05:56:57 -- setup/hugepages.sh@99 -- # surp=0
00:04:26.763 05:56:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:26.763 05:56:57 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:26.763 05:56:57 -- setup/common.sh@18 -- # local node=
00:04:26.763 05:56:57 -- setup/common.sh@19 -- # local var val
00:04:26.763 05:56:57 -- setup/common.sh@20 -- # local mem_f mem
00:04:26.763 05:56:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.763 05:56:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.763 05:56:57 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.763 05:56:57 -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.763 05:56:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.763 05:56:57 -- setup/common.sh@31 -- # IFS=': '
00:04:26.763 05:56:57 -- setup/common.sh@31 -- # read -r var val _
00:04:26.763 05:56:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5075200 kB' 'MemAvailable: 9488476 kB' 'Buffers: 35352 kB' 'Cached: 4516516 kB' 'SwapCached: 0 kB' 'Active: 994316 kB' 'Inactive: 3686428 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 139496 kB' 'Active(file): 993276 kB' 'Inactive(file): 3546932 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 158128 kB' 'Mapped: 67340 kB' 'Shmem: 2596 kB' 'KReclaimable: 194372 kB' 'Slab: 258856 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 64484 kB' 'KernelStack: 4352 kB' 'PageTables: 3308 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 498432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19596 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 141164 kB' 'DirectMap2M: 4052992 kB' 'DirectMap1G: 10485760 kB'
[... xtrace scans the snapshot ([[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] -> continue), reaching SReclaimable as this excerpt ends ...]
00:04:26.764 05:56:57
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # continue 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # continue 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # continue 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # continue 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # continue 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # continue 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # continue 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # continue 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # continue 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # continue 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # continue 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # continue 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # continue 00:04:26.764 05:56:57 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # continue 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # continue 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # continue 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # continue 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # continue 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # continue 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # continue 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.764 05:56:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.764 05:56:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.764 05:56:57 -- setup/common.sh@33 -- # echo 0 00:04:26.764 05:56:57 -- setup/common.sh@33 -- # return 0 00:04:26.764 05:56:57 -- setup/hugepages.sh@100 -- # resv=0 00:04:26.764 05:56:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:26.764 nr_hugepages=1025 00:04:26.764 resv_hugepages=0 00:04:26.764 05:56:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:26.764 surplus_hugepages=0 00:04:26.764 05:56:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:26.764 anon_hugepages=0 00:04:26.764 05:56:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:26.764 05:56:57 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:26.764 05:56:57 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:26.764 05:56:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:26.764 05:56:57 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:26.764 05:56:57 -- setup/common.sh@18 -- # local node= 00:04:26.764 05:56:57 -- setup/common.sh@19 -- # local var val 00:04:26.764 05:56:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.764 05:56:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.764 05:56:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.764 05:56:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.764 
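What the condensed scans above are doing: get_meminfo() snapshots its meminfo source once (the printf), then walks it line by line with IFS=': ' until the requested key matches, and echoes that key's value. A minimal sketch of the loop, reconstructed from this xtrace rather than copied from setup/common.sh:

  # Sketch (assumption: reconstructed from the trace at common.sh@17-@33).
  get_meminfo() {
    local get=$1 node=${2:-}
    local var val _ mem line mem_f=/proc/meminfo
    # @23/@24: per-node stats come from sysfs when a node id is passed
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    shopt -s extglob                    # for the +([0-9]) pattern on the next line
    mem=("${mem[@]#Node +([0-9]) }")    # @29: strip the "Node N " prefix of per-node files
    for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == "$get" ]] && { echo "$val"; return 0; }   # @33: emit the value
    done
    return 1
  }

  get_meminfo HugePages_Rsvd    # prints 0 on this box, hence resv=0 above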
00:04:26.764 05:56:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[xtrace condensed: get_meminfo prologue (locals, mem_f=/proc/meminfo, mapfile) as above]
00:04:26.764 05:56:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5075200 kB' 'MemAvailable: 9488476 kB' 'Buffers: 35352 kB' 'Cached: 4516516 kB' 'SwapCached: 0 kB' 'Active: 994316 kB' 'Inactive: 3686168 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 139236 kB' 'Active(file): 993276 kB' 'Inactive(file): 3546932 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 157868 kB' 'Mapped: 67340 kB' 'Shmem: 2596 kB' 'KReclaimable: 194372 kB' 'Slab: 258856 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 64484 kB' 'KernelStack: 4420 kB' 'PageTables: 3308 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071888 kB' 'Committed_AS: 498432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19596 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 141164 kB' 'DirectMap2M: 4052992 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: per-key scan until HugePages_Total matches]
00:04:26.766 05:56:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.766 05:56:57 -- setup/common.sh@33 -- # echo 1025
00:04:26.766 05:56:57 -- setup/common.sh@33 -- # return 0
00:04:26.766 05:56:57 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:26.766 05:56:57 -- setup/hugepages.sh@112 -- # get_nodes
00:04:26.766 05:56:57 -- setup/hugepages.sh@27 -- # local node
00:04:26.766 05:56:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:26.766 05:56:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:26.766 05:56:57 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:26.766 05:56:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
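get_nodes (hugepages.sh@27-@33) just recorded one NUMA node holding all 1025 pages; the pass that follows folds reserved and per-node surplus pages into the expected counts before comparing. A sketch of that bookkeeping, reusing the get_meminfo helper sketched earlier (assumption: nodes_test already holds the per-node expectations and resv the reserved count, 1025 and 0 in this run):

  # Sketch of the per-node reconciliation traced at hugepages.sh@115-@117.
  declare -a nodes_sys nodes_test
  shopt -s extglob nullglob
  for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
  done
  no_nodes=${#nodes_sys[@]}                 # 1 on this VM
  for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))          # reserved pages still belong to the node
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
  done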
00:04:26.766 05:56:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:26.766 05:56:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:26.766 05:56:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:26.766 05:56:57 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:26.766 05:56:57 -- setup/common.sh@18 -- # local node=0
00:04:26.766 05:56:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:26.766 05:56:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
[xtrace condensed: remaining get_meminfo prologue as above]
00:04:26.766 05:56:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5075200 kB' 'MemUsed: 7167780 kB' 'SwapCached: 0 kB' 'Active: 994316 kB' 'Inactive: 3686168 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 139236 kB' 'Active(file): 993276 kB' 'Inactive(file): 3546932 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 4551868 kB' 'Mapped: 67340 kB' 'AnonPages: 157868 kB' 'Shmem: 2596 kB' 'KernelStack: 4420 kB' 'PageTables: 3568 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 194372 kB' 'Slab: 258856 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 64484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
[xtrace condensed: per-key scan of the node0 meminfo until HugePages_Surp matches]
00:04:26.767 05:56:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.767 05:56:57 -- setup/common.sh@33 -- # echo 0
00:04:26.767 05:56:57 -- setup/common.sh@33 -- # return 0
00:04:26.767 05:56:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:26.767 05:56:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:26.767 05:56:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:26.767 05:56:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:26.767 05:56:57 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
node0=1025 expecting 1025
00:04:26.767 05:56:57 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:04:26.767 real 0m1.407s
00:04:26.767 user 0m0.289s
00:04:26.767 sys 0m1.165s
00:04:26.767 05:56:57 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:26.767 05:56:57 -- common/autotest_common.sh@10 -- # set +x
00:04:26.767 ************************************
00:04:26.767 END TEST odd_alloc
00:04:26.767 ************************************
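odd_alloc presumably requests the odd count 1025 so that any even-split or rounding assumption in the allocator would surface as a mismatch; it passes because the identities traced at hugepages.sh@107/@109 and the node0 comparison all hold. Restated as standalone checks with this run's values (a recap, not additional test code):

  nr_hugepages=1025 surp=0 resv=0
  total=$(get_meminfo HugePages_Total)        # 1025 per the snapshots above
  (( total == nr_hugepages + surp + resv ))   # hugepages.sh@107
  (( total == nr_hugepages ))                 # hugepages.sh@109
  [[ ${nodes_sys[0]} -eq ${nodes_test[0]} ]]  # the 'node0=1025 expecting 1025' line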
00:04:26.767 05:56:57 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:26.767 05:56:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:26.767 05:56:57 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:26.767 05:56:57 -- common/autotest_common.sh@10 -- # set +x
00:04:26.767 ************************************
00:04:26.767 START TEST custom_alloc
00:04:26.767 ************************************
00:04:26.767 05:56:57 -- common/autotest_common.sh@1104 -- # custom_alloc
00:04:26.767 05:56:57 -- setup/hugepages.sh@167 -- # local IFS=,
00:04:26.767 05:56:57 -- setup/hugepages.sh@169 -- # local node
00:04:26.767 05:56:57 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:26.767 05:56:57 -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:26.767 05:56:57 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:26.767 05:56:57 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:26.767 05:56:57 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:26.767 05:56:57 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:26.767 05:56:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:26.767 05:56:57 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:26.767 05:56:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:26.767 05:56:57 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:26.767 05:56:57 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:26.767 05:56:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:26.767 05:56:57 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:26.767 05:56:57 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:26.767 05:56:57 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:26.767 05:56:57 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:26.767 05:56:57 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:26.767 05:56:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:26.767 05:56:57 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:26.767 05:56:57 -- setup/hugepages.sh@83 -- # : 0
00:04:26.767 05:56:57 -- setup/hugepages.sh@84 -- # : 0
00:04:26.767 05:56:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:26.767 05:56:57 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:26.767 05:56:57 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:04:26.767 05:56:57 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:26.767 05:56:57 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:26.767 05:56:57 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:26.767 05:56:57 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
[xtrace condensed: second get_test_nr_hugepages_per_node pass with the same declarations as above, ending with nodes_test[_no_nodes]=512 and return 0]
00:04:26.767 05:56:57 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:04:26.767 05:56:57 -- setup/hugepages.sh@187 -- # setup output
00:04:26.767 05:56:57 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:26.767 05:56:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:27.026 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:27.284 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
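custom_alloc's setup, traced above, turns a kB size into a page count and hands setup.sh a per-node assignment via HUGENODE. The traced values imply the arithmetic (1048576 kB requested, Hugepagesize 2048 kB, hence nr_hugepages=512); the division itself is inferred from those values rather than shown, so treat this as a sketch:

  size=1048576                                    # requested pool in kB (1 GiB)
  default_hugepages=$(get_meminfo Hugepagesize)   # 2048 kB per the snapshots
  nr_hugepages=$(( size / default_hugepages ))    # 1048576 / 2048 = 512
  nodes_hp[0]=$nr_hugepages                       # single node on this VM
  for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")   # hugepages.sh@182
  done
  IFS=,                                           # @167: multiple nodes join with commas
  echo "HUGENODE=${HUGENODE[*]}"                  # -> HUGENODE=nodes_hp[0]=512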
mem=("${mem[@]#Node +([0-9]) }") 00:04:27.547 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.547 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.547 05:56:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6126628 kB' 'MemAvailable: 10539904 kB' 'Buffers: 35352 kB' 'Cached: 4516516 kB' 'SwapCached: 0 kB' 'Active: 994328 kB' 'Inactive: 3686772 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 139848 kB' 'Active(file): 993284 kB' 'Inactive(file): 3546924 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 158480 kB' 'Mapped: 67352 kB' 'Shmem: 2596 kB' 'KReclaimable: 194372 kB' 'Slab: 259340 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 64968 kB' 'KernelStack: 4384 kB' 'PageTables: 3396 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 498432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19580 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 141164 kB' 'DirectMap2M: 4052992 kB' 'DirectMap1G: 10485760 kB' 00:04:27.547 05:56:58 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.547 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.547 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.547 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.547 05:56:58 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.547 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.547 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.547 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.547 05:56:58 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.547 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.547 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.547 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.547 05:56:58 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.547 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.547 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.547 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.547 05:56:58 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.547 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.547 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.547 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.547 05:56:58 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.547 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.547 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.548 05:56:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.548 
05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.548 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.548 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.825 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.825 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.825 05:56:58 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.825 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.825 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.825 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.825 05:56:58 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.825 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.825 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.825 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.825 05:56:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.825 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.825 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.825 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.825 05:56:58 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.825 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.825 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.825 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.825 05:56:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.825 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.825 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.825 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.825 05:56:58 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.825 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.825 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.825 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.825 05:56:58 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.825 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.825 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.825 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.825 05:56:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.825 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.825 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.825 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.825 05:56:58 -- setup/common.sh@32 -- # [[ VmallocUsed == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.825 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.825 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.825 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.825 05:56:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.825 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.825 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.825 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.825 05:56:58 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.825 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.825 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.825 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.825 05:56:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.825 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.825 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.825 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.825 05:56:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.825 05:56:58 -- setup/common.sh@33 -- # echo 0 00:04:27.825 05:56:58 -- setup/common.sh@33 -- # return 0 00:04:27.825 05:56:58 -- setup/hugepages.sh@97 -- # anon=0 00:04:27.825 05:56:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:27.825 05:56:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.825 05:56:58 -- setup/common.sh@18 -- # local node= 00:04:27.825 05:56:58 -- setup/common.sh@19 -- # local var val 00:04:27.825 05:56:58 -- setup/common.sh@20 -- # local mem_f mem 00:04:27.825 05:56:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.825 05:56:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.825 05:56:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.825 05:56:58 -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.825 05:56:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.825 05:56:58 -- setup/common.sh@31 -- # IFS=': ' 00:04:27.825 05:56:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6126628 kB' 'MemAvailable: 10539904 kB' 'Buffers: 35352 kB' 'Cached: 4516516 kB' 'SwapCached: 0 kB' 'Active: 994332 kB' 'Inactive: 3686624 kB' 'Active(anon): 1048 kB' 'Inactive(anon): 139700 kB' 'Active(file): 993284 kB' 'Inactive(file): 3546924 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 158392 kB' 'Mapped: 67352 kB' 'Shmem: 2596 kB' 'KReclaimable: 194372 kB' 'Slab: 259340 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 64968 kB' 'KernelStack: 4400 kB' 'PageTables: 3436 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 498432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19580 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 141164 kB' 'DirectMap2M: 4052992 kB' 'DirectMap1G: 10485760 kB' 00:04:27.825 05:56:58 -- setup/common.sh@31 -- # read -r var val _ 00:04:27.825 05:56:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.825 05:56:58 -- setup/common.sh@32 -- # continue 00:04:27.825 05:56:58 -- 
[xtrace key scan elided: MemTotal through HugePages_Rsvd each fail the match against HugePages_Surp and hit 'continue']
00:04:27.827 05:56:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:27.827 05:56:58 -- setup/common.sh@33 -- # echo 0
00:04:27.827 05:56:58 -- setup/common.sh@33 -- # return 0
00:04:27.827 05:56:58 -- setup/hugepages.sh@99 -- # surp=0
00:04:27.827 05:56:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:27.827 05:56:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:27.827 05:56:58 -- setup/common.sh@18 -- # local node=
00:04:27.827 05:56:58 -- setup/common.sh@19 -- # local var val
00:04:27.827 05:56:58 -- setup/common.sh@20 -- # local mem_f mem
00:04:27.827 05:56:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.827 05:56:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:27.827 05:56:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:27.827 05:56:58 -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.827 05:56:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.827 05:56:58 -- setup/common.sh@31 -- # IFS=': '
00:04:27.827 05:56:58 -- setup/common.sh@31 -- # read -r var val _
00:04:27.827 05:56:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6126880 kB' 'MemAvailable: 10540156 kB' 'Buffers: 35352 kB' 'Cached: 4516516 kB' 'SwapCached: 0 kB' 'Active: 994324 kB' 'Inactive: 3686744 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 139820 kB' 'Active(file): 993284 kB' 'Inactive(file): 3546924 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 158408 kB' 'Mapped: 67340 kB' 'Shmem: 2596 kB' 'KReclaimable: 194372 kB' 'Slab: 259356 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 64984 kB' 'KernelStack: 4352 kB' 'PageTables: 3304 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 498432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19580 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 141164 kB' 'DirectMap2M: 4052992 kB' 'DirectMap1G: 10485760 kB'
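Each printf dump like the one just above is the snapshot that get_meminfo iterates over; the only fields this test actually consumes are the HugePages_* counters near the end. For spot-checking the same values outside the suite, hypothetical awk one-liners (not part of the test pool) would read:

    # Hypothetical equivalents for reading the counters seen in the dumps:
    awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo   # 0 in this run
    awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo   # 0 in this run
    awk '/^HugePages_Total:/ {print $2}' /proc/meminfo   # 512 in this run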
[xtrace key scan elided: MemTotal through HugePages_Free each fail the match against HugePages_Rsvd and hit 'continue']
00:04:27.829 05:56:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:27.829 05:56:58 -- setup/common.sh@33 -- # echo 0
00:04:27.829 05:56:58 -- setup/common.sh@33 -- # return 0
00:04:27.829 05:56:58 -- setup/hugepages.sh@100 -- # resv=0
00:04:27.829 05:56:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:27.829 nr_hugepages=512
00:04:27.829 05:56:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:27.829 resv_hugepages=0
00:04:27.829 05:56:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:27.829 surplus_hugepages=0
00:04:27.829 05:56:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:27.829 anon_hugepages=0
00:04:27.829 05:56:58 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:27.829 05:56:58 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
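The hugepages.sh@97-@109 steps just traced reduce to one invariant: the configured pool size must equal what the kernel reports once surplus and reserved pages are folded in. A sketch of that check, using the variable names from the trace (the surrounding code is approximated, not the verbatim source):

    # Sketch of the accounting check traced at hugepages.sh@97-@109 (approximation).
    anon=$(get_meminfo AnonHugePages)    # 0: no anonymous (transparent) hugepages
    surp=$(get_meminfo HugePages_Surp)   # 0: nothing allocated beyond the pool
    resv=$(get_meminfo HugePages_Rsvd)   # 0: nothing reserved but not yet faulted

    echo "nr_hugepages=$nr_hugepages"    # 512 in this run
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # The pool must balance, and it must match what the test configured.
    (( 512 == nr_hugepages + surp + resv )) || exit 1
    (( 512 == nr_hugepages ))               || exit 1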
00:04:27.829 05:56:58 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:27.829 05:56:58 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:27.829 05:56:58 -- setup/common.sh@18 -- # local node=
00:04:27.829 05:56:58 -- setup/common.sh@19 -- # local var val
00:04:27.829 05:56:58 -- setup/common.sh@20 -- # local mem_f mem
00:04:27.829 05:56:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.829 05:56:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:27.829 05:56:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:27.829 05:56:58 -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.829 05:56:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.829 05:56:58 -- setup/common.sh@31 -- # IFS=': '
00:04:27.829 05:56:58 -- setup/common.sh@31 -- # read -r var val _
00:04:27.829 05:56:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6126880 kB' 'MemAvailable: 10540156 kB' 'Buffers: 35352 kB' 'Cached: 4516516 kB' 'SwapCached: 0 kB' 'Active: 994324 kB' 'Inactive: 3686576 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 139652 kB' 'Active(file): 993284 kB' 'Inactive(file): 3546924 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 158240 kB' 'Mapped: 67340 kB' 'Shmem: 2596 kB' 'KReclaimable: 194372 kB' 'Slab: 259356 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 64984 kB' 'KernelStack: 4404 kB' 'PageTables: 3524 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597200 kB' 'Committed_AS: 498432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19596 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 141164 kB' 'DirectMap2M: 4052992 kB' 'DirectMap1G: 10485760 kB'
[xtrace key scan elided: MemTotal through FilePmdMapped each fail the match against HugePages_Total and hit 'continue']
00:04:27.831 05:56:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:27.831 05:56:58 -- setup/common.sh@33 -- # echo 512
00:04:27.831 05:56:58 -- setup/common.sh@33 -- # return 0
00:04:27.831 05:56:58 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:27.831 05:56:58 -- setup/hugepages.sh@112 -- # get_nodes
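The get_nodes call above and the hugepages.sh@115-@117 loop that continues below repeat the surplus check per NUMA node, which is what eventually prints 'node0=512 expecting 512'. A sketch of that per-node pass under the same single-node assumptions (approximation; the 512 per node mirrors what this trace shows, not a general rule):

    # Sketch of the per-node pass traced at hugepages.sh@27-@33 / @115-@128.
    shopt -s extglob
    declare -a nodes_sys nodes_test
    nodes_test[0]=512   # set earlier by get_test_nr_hugepages_per_node

    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512   # per-node share of the configured pool
    done

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                   # reserved
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # surplus
    done

    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || exit 1
    done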
00:04:27.831 05:56:58 -- setup/hugepages.sh@27 -- # local node
00:04:27.831 05:56:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:27.831 05:56:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:27.831 05:56:58 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:27.831 05:56:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:27.831 05:56:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:27.831 05:56:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:27.831 05:56:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:27.831 05:56:58 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:27.831 05:56:58 -- setup/common.sh@18 -- # local node=0
00:04:27.831 05:56:58 -- setup/common.sh@19 -- # local var val
00:04:27.831 05:56:58 -- setup/common.sh@20 -- # local mem_f mem
00:04:27.831 05:56:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.831 05:56:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:27.831 05:56:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:27.831 05:56:58 -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.831 05:56:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.831 05:56:58 -- setup/common.sh@31 -- # IFS=': '
00:04:27.831 05:56:58 -- setup/common.sh@31 -- # read -r var val _
00:04:27.831 05:56:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 6126880 kB' 'MemUsed: 6116100 kB' 'SwapCached: 0 kB' 'Active: 994324 kB' 'Inactive: 3686528 kB' 'Active(anon): 1040 kB' 'Inactive(anon): 139604 kB' 'Active(file): 993284 kB' 'Inactive(file): 3546924 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 4551868 kB' 'Mapped: 67340 kB' 'AnonPages: 158192 kB' 'Shmem: 2596 kB' 'KernelStack: 4456 kB' 'PageTables: 3484 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 194372 kB' 'Slab: 259356 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 64984 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace key scan elided: MemTotal through HugePages_Free in the node0 meminfo each fail the match against HugePages_Surp and hit 'continue']
00:04:27.832 05:56:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:27.832 05:56:58 -- setup/common.sh@33 -- # echo 0
00:04:27.832 05:56:58 -- setup/common.sh@33 -- # return 0
00:04:27.832 05:56:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:27.832 05:56:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:27.832 05:56:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:27.832 05:56:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:27.832 05:56:58 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:27.832 node0=512 expecting 512
00:04:27.832 05:56:58 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:27.832
00:04:27.832 real 0m1.033s
00:04:27.832 user 0m0.288s
00:04:27.832 sys 0m0.797s
00:04:27.832 05:56:58 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:27.832 05:56:58 -- common/autotest_common.sh@10 -- # set +x
00:04:27.832 ************************************
00:04:27.832 END TEST custom_alloc
00:04:27.832 ************************************
00:04:27.832 05:56:58 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:27.832 05:56:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:27.832 05:56:58 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:27.832 05:56:58 -- common/autotest_common.sh@10 -- # set +x
00:04:27.832 ************************************
00:04:27.832 START TEST no_shrink_alloc
00:04:27.832 ************************************
00:04:27.832 05:56:58 -- common/autotest_common.sh@1104 -- # no_shrink_alloc
00:04:27.832 05:56:58 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:27.832 05:56:58 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:27.832 05:56:58 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:27.832 05:56:58 -- setup/hugepages.sh@51 -- # shift
00:04:27.832 05:56:58 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:27.832 05:56:58 -- setup/hugepages.sh@52 -- # local node_ids
00:04:27.832 05:56:58 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:27.832 05:56:58 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:27.832 05:56:58 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:27.832 05:56:58 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:27.832 05:56:58 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:27.832 05:56:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:27.832 05:56:58 -- setup/hugepages.sh@65 -- # local _no_nodes=1
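The no_shrink_alloc prologue just traced (hugepages.sh@49-@71) turns the requested size into a page count: 2097152 becomes nr_hugepages=1024, then the request is spread across the given nodes. A sketch of that computation; the division by the default hugepage size and the kB units are my reading of the numbers (2097152 / 2048 = 1024 with Hugepagesize 2048 kB), not confirmed against the source:

    # Sketch of get_test_nr_hugepages as traced at hugepages.sh@49-@71 (approximation).
    get_test_nr_hugepages() {
        local size=$1; shift               # 2097152 (kB) == 2 GiB in this run
        local node_ids=("$@")              # e.g. ('0')
        : "${default_hugepages:=2048}"     # hugepage size in kB (assumed default)

        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024

        # Mirror of get_test_nr_hugepages_per_node (@62-@71): full share per node.
        local _no_nodes
        declare -g -a nodes_test=()
        for _no_nodes in "${node_ids[@]}"; do
            nodes_test[_no_nodes]=$nr_hugepages
        done
    }

    get_test_nr_hugepages 2097152 0   # trace: nr_hugepages=1024, nodes_test[0]=1024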
00:04:27.832 05:56:58 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:27.832 05:56:58 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:27.832 05:56:58 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:27.832 05:56:58 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:27.832 05:56:58 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:27.832 05:56:58 -- setup/hugepages.sh@73 -- # return 0
00:04:27.832 05:56:58 -- setup/hugepages.sh@198 -- # setup output
00:04:27.832 05:56:58 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:27.832 05:56:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:28.401 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:29.341 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:29.341 05:56:59 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:29.341 05:56:59 -- setup/hugepages.sh@89 -- # local node
00:04:29.341 05:56:59 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:29.341 05:56:59 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:29.341 05:56:59 -- setup/hugepages.sh@92 -- # local surp
00:04:29.341 05:56:59 -- setup/hugepages.sh@93 -- # local resv
00:04:29.341 05:56:59 -- setup/hugepages.sh@94 -- # local anon
00:04:29.341 05:56:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:29.341 05:56:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:29.341 05:56:59 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:29.341 05:56:59 -- setup/common.sh@18 -- # local node=
00:04:29.341 05:56:59 -- setup/common.sh@19 -- # local var val
00:04:29.341 05:56:59 -- setup/common.sh@20 -- # local mem_f mem
00:04:29.341 05:56:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.341 05:56:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:29.341 05:56:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:29.341 05:56:59 -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.341 05:56:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.342 05:56:59 -- setup/common.sh@31 -- # IFS=': '
00:04:29.342 05:56:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5078824 kB' 'MemAvailable: 9492104 kB' 'Buffers: 35360 kB' 'Cached: 4516512 kB' 'SwapCached: 0 kB' 'Active: 994336 kB' 'Inactive: 3686376 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 139448 kB' 'Active(file): 993284 kB' 'Inactive(file): 3546928 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 158044 kB' 'Mapped: 67392 kB' 'Shmem: 2596 kB' 'KReclaimable: 194372 kB' 'Slab: 259188 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 64816 kB' 'KernelStack: 4352 kB' 'PageTables: 3320 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 497640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19540 kB' 'VmallocChunk: 0 kB' 'Percpu: 8112 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 141164 kB' 'DirectMap2M: 4052992 kB' 'DirectMap1G: 10485760 kB'
00:04:29.342 05:56:59 -- setup/common.sh@31 -- # read -r var val _
00:04:29.342 05:56:59 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:29.342 05:56:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:29.343 05:56:59 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:29.343 05:56:59 -- setup/common.sh@18 -- # local node=
00:04:29.343 05:56:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.343 05:56:59 -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.343 05:56:59 -- setup/common.sh@16 -- # printf '%s\n' [snapshot condensed: same layout as above, now MemFree: 5078572 kB, AnonPages: 157904 kB, KernelStack: 4368 kB, PageTables: 3352 kB; HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Surp: 0]
00:04:29.343 [trace condensed: per-key scan runs until HugePages_Surp matches]
00:04:29.343 05:56:59 -- setup/common.sh@33 -- # echo 0
00:04:29.343 05:56:59 -- setup/common.sh@33 -- # return 0
00:04:29.343 05:56:59 -- setup/hugepages.sh@99 -- # surp=0
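Each of these calls nets one number from the same snapshot. Outside the harness, the four values verify_nr_hugepages collects can be pulled in a single pass (an illustrative alternative, not how setup/common.sh does it):

    # One-pass equivalent of the anon/surp/resv/total lookups:
    awk '/^(AnonHugePages|HugePages_(Surp|Rsvd|Total)):/ {print $1, $2}' /proc/meminfo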
00:04:29.343 05:56:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:29.344 05:56:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:29.344 05:56:59 -- setup/common.sh@18 -- # local node=
00:04:29.344 05:56:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.344 05:56:59 -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.344 05:56:59 -- setup/common.sh@16 -- # printf '%s\n' [snapshot condensed: same layout, now Inactive(anon): 139076 kB, AnonPages: 157948 kB, PageTables: 3272 kB, VmallocUsed: 19556 kB; HugePages_Rsvd: 0]
00:04:29.344 [trace condensed: per-key scan runs until HugePages_Rsvd matches]
00:04:29.344 05:56:59 -- setup/common.sh@33 -- # echo 0
00:04:29.344 05:56:59 -- setup/common.sh@33 -- # return 0
00:04:29.344 05:56:59 -- setup/hugepages.sh@100 -- # resv=0
00:04:29.345 05:56:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:29.345 nr_hugepages=1024
00:04:29.345 05:56:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:29.345 resv_hugepages=0
00:04:29.345 05:56:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:29.345 surplus_hugepages=0
00:04:29.345 05:56:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:29.345 anon_hugepages=0
00:04:29.345 05:56:59 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:29.345 05:56:59 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:29.345 05:56:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:29.345 05:56:59 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:29.345 05:56:59 -- setup/common.sh@18 -- # local node=
00:04:29.345 05:56:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.345 05:56:59 -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.345 05:56:59 -- setup/common.sh@16 -- # printf '%s\n' [snapshot condensed: same layout, now MemFree: 5078576 kB, Dirty: 208 kB, AnonPages: 157432 kB, Mapped: 67340 kB, Shmem: 2588 kB, PageTables: 3196 kB; HugePages_Total: 1024]
00:04:29.346 [trace condensed: per-key scan runs until HugePages_Total matches]
00:04:29.347 05:56:59 -- setup/common.sh@33 -- # echo 1024
00:04:29.347 05:56:59 -- setup/common.sh@33 -- # return 0
00:04:29.347 05:56:59 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
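The @107 and @110 checks are the heart of the verification: the kernel's HugePages_Total must equal the requested pool plus whatever is surplus or reserved, and here all three extras are zero. The same check as a standalone sketch (awk lookups stand in for get_meminfo; variable names are illustrative):

    #!/usr/bin/env bash
    # Sketch of the verify_nr_hugepages pool-accounting check.
    nr_hugepages=1024                                   # requested by the test
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage pool consistent: total=$total surp=$surp resv=$resv"
    else
        echo "hugepage pool mismatch: total=$total" >&2
        exit 1
    fi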
00:04:29.347 05:56:59 -- setup/hugepages.sh@112 -- # get_nodes
00:04:29.347 05:56:59 -- setup/hugepages.sh@27 -- # local node
00:04:29.347 05:56:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:29.347 05:56:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:29.347 05:56:59 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:29.347 05:56:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
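get_nodes enumerates NUMA nodes with an extglob over /sys/devices/system/node and peels the numeric id off the directory name with ${node##*node}; per-node counters then come from each node's own meminfo file, whose lines carry a "Node N " prefix. A standalone sketch of that discovery step (reading the per-node hugepage count directly here is for illustration; the harness goes through get_meminfo):

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    # Sketch of the get_nodes pattern: one array slot per NUMA node,
    # keyed by the number peeled off the directory name.
    declare -a nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # node meminfo lines look like "Node 0 HugePages_Total:  1024",
        # so the value is field 4
        nodes_sys[${node##*node}]=$(awk '/HugePages_Total:/ {print $4}' "$node/meminfo")
    done
    echo "found ${#nodes_sys[@]} node(s)"
    for n in "${!nodes_sys[@]}"; do
        echo "node$n: HugePages_Total=${nodes_sys[$n]}"
    done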
'Writeback: 0 kB' 'FilePages: 4551872 kB' 'Mapped: 67340 kB' 'AnonPages: 157660 kB' 'Shmem: 2588 kB' 'KernelStack: 4356 kB' 'PageTables: 3156 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 194372 kB' 'Slab: 259208 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 64836 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:29.347 05:56:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:29.347 05:56:59 -- setup/common.sh@32 -- # continue
00:04:29.347 05:56:59 -- setup/common.sh@31 -- # IFS=': '
00:04:29.347 05:56:59 -- setup/common.sh@31 -- # read -r var val _
[... xtrace "[[ <key> == ... ]] / continue" repeated for each remaining meminfo key ...]
00:04:29.347 05:56:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:29.347 05:56:59 -- setup/common.sh@33 -- # echo 0
00:04:29.347 05:56:59 -- setup/common.sh@33 -- # return 0
00:04:29.347 05:56:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:29.348 05:56:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:29.348 05:56:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:29.348 05:56:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:29.348 05:56:59 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:29.348 node0=1024 expecting 1024
00:04:29.348 05:56:59 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
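Editor's note: the sorted_t[...]=1 / sorted_s[...]=1 assignments above use bash associative arrays as sets, so duplicate per-node hugepage counts collapse onto one key before the observed and expected counts are compared. A minimal sketch of the idiom (standalone, illustrative names, not the exact hugepages.sh source):

    declare -A seen                      # bash 4+ associative array used as a set
    for count in 1024 1024 512; do       # e.g. per-node hugepage counts
        seen[$count]=1                   # duplicates collapse onto one key
    done
    echo "distinct counts: ${!seen[@]}"  # prints the keys, e.g. "512 1024"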
00:04:29.348 05:56:59 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:29.348 05:56:59 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:29.348 05:56:59 -- setup/hugepages.sh@202 -- # setup output
00:04:29.348 05:56:59 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:29.348 05:56:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:29.607 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:29.869 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:29.869 INFO: Requested 512 hugepages but 1024 already allocated on node0
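Editor's note: CLEAR_HUGE and NRHUGE are environment knobs consumed by scripts/setup.sh, as the trace above shows: the test asks for 512 pages without clearing the pool, and setup.sh keeps the 1024 pages already reserved (hence the INFO line). Reproduced standalone, with the values from this run:

    # the allocation step above, run by hand
    CLEAR_HUGE=no NRHUGE=512 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
    # CLEAR_HUGE=no leaves the existing pool in place, so the larger
    # 1024-page reservation survives and setup.sh only reports it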
00:04:29.869 05:57:00 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:29.869 05:57:00 -- setup/hugepages.sh@89 -- # local node
00:04:29.869 05:57:00 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:29.869 05:57:00 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:29.869 05:57:00 -- setup/hugepages.sh@92 -- # local surp
00:04:29.869 05:57:00 -- setup/hugepages.sh@93 -- # local resv
00:04:29.869 05:57:00 -- setup/hugepages.sh@94 -- # local anon
00:04:29.869 05:57:00 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:29.869 05:57:00 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:29.869 05:57:00 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:29.869 05:57:00 -- setup/common.sh@18 -- # local node=
00:04:29.869 05:57:00 -- setup/common.sh@19 -- # local var val
00:04:29.869 05:57:00 -- setup/common.sh@20 -- # local mem_f mem
00:04:29.869 05:57:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.869 05:57:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:29.869 05:57:00 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:29.869 05:57:00 -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.869 05:57:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.869 05:57:00 -- setup/common.sh@31 -- # IFS=': '
00:04:29.869 05:57:00 -- setup/common.sh@31 -- # read -r var val _
00:04:29.869 05:57:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5077088 kB' 'MemAvailable: 9490376 kB' 'Buffers: 35360 kB' 'Cached: 4516520 kB' 'SwapCached: 0 kB' 'Active: 994344 kB' 'Inactive: 3687176 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 140248 kB' 'Active(file): 993292 kB' 'Inactive(file): 3546928 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 384 kB' 'Writeback: 0 kB' 'AnonPages: 158652 kB' 'Mapped: 67368 kB' 'Shmem: 2596 kB' 'KReclaimable: 194372 kB' 'Slab: 258984 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 64612 kB' 'KernelStack: 4464 kB' 'PageTables: 3808 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19628 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 141164 kB' 'DirectMap2M: 4052992 kB' 'DirectMap1G: 10485760 kB'
00:04:29.869 05:57:00 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:29.869 05:57:00 -- setup/common.sh@32 -- # continue
00:04:29.869 05:57:00 -- setup/common.sh@31 -- # IFS=': '
00:04:29.869 05:57:00 -- setup/common.sh@31 -- # read -r var val _
[... xtrace "[[ <key> == ... ]] / continue" repeated for each remaining meminfo key ...]
00:04:29.870 05:57:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:29.870 05:57:00 -- setup/common.sh@33 -- # echo 0
00:04:29.870 05:57:00 -- setup/common.sh@33 -- # return 0
00:04:29.870 05:57:00 -- setup/hugepages.sh@97 -- # anon=0
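Editor's note: the compare/continue churn above is get_meminfo walking the snapshot it just printf'ed, one "var val" pair at a time, until the requested key matches. A minimal standalone sketch of that pattern (system-wide /proc/meminfo only; names are illustrative, not the exact common.sh source):

    get_meminfo() {
        local get=$1 var val _
        # IFS=': ' splits "HugePages_Surp:    0" into var=HugePages_Surp, val=0
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"            # the "echo 0" seen at common.sh@33 above
                return 0
            fi
            # every other key falls through: the "continue" lines in the trace
        done < /proc/meminfo
        return 1                       # key not present
    }
    get_meminfo AnonHugePages          # -> 0 on this host, hence anon=0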
00:04:29.870 05:57:00 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:29.870 05:57:00 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:29.870 05:57:00 -- setup/common.sh@18 -- # local node=
00:04:29.870 05:57:00 -- setup/common.sh@19 -- # local var val
00:04:29.870 05:57:00 -- setup/common.sh@20 -- # local mem_f mem
00:04:29.870 05:57:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.870 05:57:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:29.870 05:57:00 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:29.870 05:57:00 -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.870 05:57:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.870 05:57:00 -- setup/common.sh@31 -- # IFS=': '
00:04:29.870 05:57:00 -- setup/common.sh@31 -- # read -r var val _
00:04:29.870 05:57:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5077088 kB' 'MemAvailable: 9490376 kB' 'Buffers: 35360 kB' 'Cached: 4516520 kB' 'SwapCached: 0 kB' 'Active: 994344 kB' 'Inactive: 3687144 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 140216 kB' 'Active(file): 993292 kB' 'Inactive(file): 3546928 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 384 kB' 'Writeback: 0 kB' 'AnonPages: 158600 kB' 'Mapped: 67368 kB' 'Shmem: 2596 kB' 'KReclaimable: 194372 kB' 'Slab: 258984 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 64612 kB' 'KernelStack: 4448 kB' 'PageTables: 3780 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19644 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 141164 kB' 'DirectMap2M: 4052992 kB' 'DirectMap1G: 10485760 kB'
00:04:29.870 05:57:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:29.870 05:57:00 -- setup/common.sh@32 -- # continue
[... xtrace "[[ <key> == ... ]] / continue" repeated for each remaining meminfo key ...]
00:04:29.871 05:57:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:29.871 05:57:00 -- setup/common.sh@33 -- # echo 0
00:04:29.871 05:57:00 -- setup/common.sh@33 -- # return 0
00:04:29.871 05:57:00 -- setup/hugepages.sh@99 -- # surp=0
00:04:29.871 05:57:00 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:29.871 05:57:00 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:29.871 05:57:00 -- setup/common.sh@18 -- # local node=
00:04:29.871 05:57:00 -- setup/common.sh@19 -- # local var val
00:04:29.871 05:57:00 -- setup/common.sh@20 -- # local mem_f mem
00:04:29.871 05:57:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.871 05:57:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:29.871 05:57:00 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:29.871 05:57:00 -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.871 05:57:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.872 05:57:00 -- setup/common.sh@31 -- # IFS=': '
00:04:29.872 05:57:00 -- setup/common.sh@31 -- # read -r var val _
00:04:29.872 05:57:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5077328 kB' 'MemAvailable: 9490616 kB' 'Buffers: 35360 kB' 'Cached: 4516520 kB' 'SwapCached: 0 kB' 'Active: 994344 kB' 'Inactive: 3687144 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 140216 kB' 'Active(file): 993292 kB' 'Inactive(file): 3546928 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 384 kB' 'Writeback: 0 kB' 'AnonPages: 158792 kB' 'Mapped: 67368 kB' 'Shmem: 2596 kB' 'KReclaimable: 194372 kB' 'Slab: 258984 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 64612 kB' 'KernelStack: 4404 kB' 'PageTables: 3564 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19644 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 141164 kB' 'DirectMap2M: 4052992 kB' 'DirectMap1G: 10485760 kB'
00:04:29.872 05:57:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:29.872 05:57:00 -- setup/common.sh@32 -- # continue
[... xtrace "[[ <key> == ... ]] / continue" repeated for each remaining meminfo key ...]
00:04:29.873 05:57:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:29.873 05:57:00 -- setup/common.sh@33 -- # echo 0
00:04:29.873 05:57:00 -- setup/common.sh@33 -- # return 0
00:04:29.873 05:57:00 -- setup/hugepages.sh@100 -- # resv=0
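Editor's note: three full scans of the same snapshot have now produced anon=0, surp=0 and resv=0. For comparison, the same three probes can be answered in a single pass (a hypothetical convenience, not something the harness does):

    awk -F': +' '/^(AnonHugePages|HugePages_Surp|HugePages_Rsvd):/ { print $1, $2 }' /proc/meminfo
    # AnonHugePages 0 kB    -> anon=0
    # HugePages_Surp 0      -> surp=0
    # HugePages_Rsvd 0      -> resv=0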
00:04:29.873 05:57:00 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:29.873 nr_hugepages=1024
00:04:29.873 05:57:00 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:29.873 resv_hugepages=0
00:04:29.873 05:57:00 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:29.873 surplus_hugepages=0
00:04:29.873 05:57:00 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:29.873 anon_hugepages=0
00:04:29.873 05:57:00 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:29.873 05:57:00 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
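Editor's note: the two arithmetic checks above are the heart of verify_nr_hugepages: the pool the kernel reports must equal what the test requested plus any surplus and reserved pages. Restated with the values just gathered:

    # values established by the probes above
    nr_hugepages=1024 surp=0 resv=0
    (( 1024 == nr_hugepages + surp + resv )) && echo "pool fully accounted for"   # true: 1024 == 1024+0+0
    (( 1024 == nr_hugepages ))               && echo "no surplus/reserved pages"  # true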
00:04:29.873 05:57:00 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:29.873 05:57:00 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:29.873 05:57:00 -- setup/common.sh@18 -- # local node=
00:04:29.873 05:57:00 -- setup/common.sh@19 -- # local var val
00:04:29.873 05:57:00 -- setup/common.sh@20 -- # local mem_f mem
00:04:29.873 05:57:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.873 05:57:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:29.873 05:57:00 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:29.873 05:57:00 -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.873 05:57:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.873 05:57:00 -- setup/common.sh@31 -- # IFS=': '
00:04:29.873 05:57:00 -- setup/common.sh@31 -- # read -r var val _
00:04:29.873 05:57:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5077328 kB' 'MemAvailable: 9490616 kB' 'Buffers: 35360 kB' 'Cached: 4516520 kB' 'SwapCached: 0 kB' 'Active: 994344 kB' 'Inactive: 3686896 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 139968 kB' 'Active(file): 993292 kB' 'Inactive(file): 3546928 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 384 kB' 'Writeback: 0 kB' 'AnonPages: 158488 kB' 'Mapped: 67328 kB' 'Shmem: 2596 kB' 'KReclaimable: 194372 kB' 'Slab: 259016 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 64644 kB' 'KernelStack: 4408 kB' 'PageTables: 3684 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072912 kB' 'Committed_AS: 498432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19644 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 141164 kB' 'DirectMap2M: 4052992 kB' 'DirectMap1G: 10485760 kB'
00:04:29.873 05:57:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:29.873 05:57:00 -- setup/common.sh@32 -- # continue
[... xtrace "[[ <key> == ... ]] / continue" repeated for each remaining meminfo key ...]
00:04:29.874 05:57:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:29.874 05:57:00 -- setup/common.sh@33 -- # echo 1024
00:04:29.874 05:57:00 -- setup/common.sh@33 -- # return 0
00:04:29.874 05:57:00 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:29.874 05:57:00 -- setup/hugepages.sh@112 -- # get_nodes
00:04:29.874 05:57:00 -- setup/hugepages.sh@27 -- # local node
00:04:29.874 05:57:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:29.874 05:57:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:29.874 05:57:00 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:29.874 05:57:00 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:29.874 05:57:00 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:29.874 05:57:00 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:29.874 05:57:00 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:29.874 05:57:00 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:29.874 05:57:00 -- setup/common.sh@18 -- # local node=0
00:04:29.874 05:57:00 -- setup/common.sh@19 -- # local var val
00:04:29.874 05:57:00 -- setup/common.sh@20 -- # local mem_f mem
00:04:29.874 05:57:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.874 05:57:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:29.874 05:57:00 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:29.874 05:57:00 -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.874 05:57:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.874 05:57:00 -- setup/common.sh@31 -- # IFS=': '
00:04:29.874 05:57:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242980 kB' 'MemFree: 5076828 kB' 'MemUsed: 7166152 kB' 'SwapCached: 0 kB' 'Active: 994344 kB' 'Inactive: 3686756 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 139828 kB' 'Active(file): 993292 kB' 'Inactive(file): 3546928 kB' 'Unevictable: 29168 kB' 'Mlocked: 27632 kB' 'Dirty: 384 kB' 'Writeback: 0 kB' 'FilePages: 4551880 kB' 'Mapped: 67328 kB' 'AnonPages: 158644 kB' 'Shmem: 
2596 kB' 'KernelStack: 4424 kB' 'PageTables: 3468 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 194372 kB' 'Slab: 259016 kB' 'SReclaimable: 194372 kB' 'SUnreclaim: 64644 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:29.874 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.874 05:57:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.874 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.874 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.874 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.874 05:57:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.874 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.874 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.874 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.874 05:57:00 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.874 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.874 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.874 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 
05:57:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': 
' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # continue 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.875 05:57:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.875 05:57:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.875 05:57:00 -- setup/common.sh@33 -- # echo 0 00:04:29.875 05:57:00 -- setup/common.sh@33 -- # return 0 00:04:29.875 05:57:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.875 05:57:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.875 05:57:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.875 05:57:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.875 05:57:00 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:29.875 
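The compare-and-continue run condensed above is setup/common.sh's get_meminfo helper scanning each 'Key: value' pair of /proc/meminfo (or the per-node copy under /sys/devices/system/node) until it reaches the requested field: HugePages_Total globally, then HugePages_Surp for node 0. A minimal sketch of that pattern, reconstructed from the xtrace rather than copied from the source, so exact variable names and line numbers may differ:

shopt -s extglob                       # needed for the +([0-9]) pattern below

# get_meminfo FIELD [NODE]: print FIELD's value from /proc/meminfo, or from
# the per-node meminfo when NODE is given and the sysfs file exists.
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo mem

    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N" prefix

    # Split each "Key: value kB" line on ': '; every non-matching key is the
    # [[ ... ]] / continue pair seen in the trace above.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

On this VM, get_meminfo HugePages_Total prints 1024 and get_meminfo HugePages_Surp 0 prints 0, matching the echoed values in the trace.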
node0=1024 expecting 1024 00:04:29.875 05:57:00 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:29.875 00:04:29.875 real 0m2.048s 00:04:29.875 user 0m0.611s 00:04:29.875 sys 0m1.549s 00:04:29.875 05:57:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.875 05:57:00 -- common/autotest_common.sh@10 -- # set +x 00:04:29.875 ************************************ 00:04:29.875 END TEST no_shrink_alloc 00:04:29.875 ************************************ 00:04:29.875 05:57:00 -- setup/hugepages.sh@217 -- # clear_hp 00:04:29.875 05:57:00 -- setup/hugepages.sh@37 -- # local node hp 00:04:29.875 05:57:00 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:29.875 05:57:00 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.875 05:57:00 -- setup/hugepages.sh@41 -- # echo 0 00:04:29.875 05:57:00 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.875 05:57:00 -- setup/hugepages.sh@41 -- # echo 0 00:04:29.875 05:57:00 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:29.875 05:57:00 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:29.875 00:04:29.875 real 0m9.238s 00:04:29.875 user 0m2.503s 00:04:29.875 sys 0m7.070s 00:04:29.875 05:57:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.875 05:57:00 -- common/autotest_common.sh@10 -- # set +x 00:04:29.875 ************************************ 00:04:29.875 END TEST hugepages 00:04:29.875 ************************************ 00:04:30.134 05:57:00 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:30.134 05:57:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:30.134 05:57:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:30.134 05:57:00 -- common/autotest_common.sh@10 -- # set +x 00:04:30.134 ************************************ 00:04:30.134 START TEST driver 00:04:30.134 ************************************ 00:04:30.134 05:57:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:30.134 * Looking for test storage... 
00:04:30.134 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:30.134 05:57:00 -- setup/driver.sh@68 -- # setup reset 00:04:30.134 05:57:00 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:30.134 05:57:00 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:30.702 05:57:01 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:30.702 05:57:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:30.702 05:57:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:30.702 05:57:01 -- common/autotest_common.sh@10 -- # set +x 00:04:30.702 ************************************ 00:04:30.702 START TEST guess_driver 00:04:30.702 ************************************ 00:04:30.702 05:57:01 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:30.702 05:57:01 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:30.702 05:57:01 -- setup/driver.sh@47 -- # local fail=0 00:04:30.702 05:57:01 -- setup/driver.sh@49 -- # pick_driver 00:04:30.702 05:57:01 -- setup/driver.sh@36 -- # vfio 00:04:30.702 05:57:01 -- setup/driver.sh@21 -- # local iommu_groups 00:04:30.702 05:57:01 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:30.702 05:57:01 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:30.702 05:57:01 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:30.702 05:57:01 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:30.702 05:57:01 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:30.702 05:57:01 -- setup/driver.sh@29 -- # [[ N == Y ]] 00:04:30.702 05:57:01 -- setup/driver.sh@32 -- # return 1 00:04:30.702 05:57:01 -- setup/driver.sh@38 -- # uio 00:04:30.702 05:57:01 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:30.702 05:57:01 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:30.702 05:57:01 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:30.702 05:57:01 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:30.702 05:57:01 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio.ko 00:04:30.702 insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:04:30.702 05:57:01 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:30.702 05:57:01 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:30.702 05:57:01 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:30.702 05:57:01 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:30.702 Looking for driver=uio_pci_generic 00:04:30.702 05:57:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:30.702 05:57:01 -- setup/driver.sh@45 -- # setup output config 00:04:30.702 05:57:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.702 05:57:01 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:31.268 05:57:01 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:31.268 05:57:01 -- setup/driver.sh@58 -- # continue 00:04:31.268 05:57:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.527 05:57:01 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.527 05:57:01 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:31.527 05:57:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.427 05:57:03 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:33.427 05:57:03 -- setup/driver.sh@65 -- # setup reset
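The guess_driver trace above records pick_driver's fallback chain: vfio is rejected because no IOMMU groups are populated and unsafe no-IOMMU mode is off ((( 0 > 0 )) and [[ N == Y ]] both fail), so the uio branch probes uio_pci_generic with modprobe --show-depends and accepts it once the output names real .ko files. A condensed sketch of that decision, reconstructed from the xtrace of test/setup/driver.sh and not the verbatim source:

#!/usr/bin/env bash
shopt -s nullglob   # an empty IOMMU dir must yield an empty array, not a literal glob

vfio() {
    local iommu_groups unsafe_vfio=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    iommu_groups=(/sys/kernel/iommu_groups/*)
    # vfio-pci is only usable with populated IOMMU groups or unsafe mode on.
    if ((${#iommu_groups[@]} > 0)) || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci
        return 0
    fi
    return 1
}

uio() {
    # --show-depends prints "insmod /lib/modules/.../uio.ko ..." lines when the
    # module and its dependencies resolve; matching *.ko* is the trace's check.
    if [[ $(modprobe --show-depends uio_pci_generic 2>/dev/null) == *.ko* ]]; then
        echo uio_pci_generic
        return 0
    fi
    return 1
}

driver=$(vfio || uio) || driver='No valid driver found'
echo "Looking for driver=$driver"

On this VM the sketch would take the uio branch, matching the 'Looking for driver=uio_pci_generic' line echoed above.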
00:04:33.427 05:57:03 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:33.427 05:57:03 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:33.685 00:04:33.685 real 0m3.024s 00:04:33.685 user 0m0.542s 00:04:33.685 sys 0m2.487s 00:04:33.685 05:57:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.685 05:57:04 -- common/autotest_common.sh@10 -- # set +x 00:04:33.685 ************************************ 00:04:33.685 END TEST guess_driver 00:04:33.685 ************************************ 00:04:33.944 00:04:33.944 real 0m3.806s 00:04:33.944 user 0m0.830s 00:04:33.944 sys 0m3.015s 00:04:33.944 05:57:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.944 05:57:04 -- common/autotest_common.sh@10 -- # set +x 00:04:33.944 ************************************ 00:04:33.944 END TEST driver 00:04:33.944 ************************************ 00:04:33.944 05:57:04 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:33.944 05:57:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:33.944 05:57:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:33.944 05:57:04 -- common/autotest_common.sh@10 -- # set +x 00:04:33.944 ************************************ 00:04:33.944 START TEST devices 00:04:33.944 ************************************ 00:04:33.944 05:57:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:33.944 * Looking for test storage... 00:04:33.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:33.944 05:57:04 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:33.944 05:57:04 -- setup/devices.sh@192 -- # setup reset 00:04:33.944 05:57:04 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:33.944 05:57:04 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:34.511 05:57:05 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:34.511 05:57:05 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:34.511 05:57:05 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:34.511 05:57:05 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:34.511 05:57:05 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:34.511 05:57:05 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:34.511 05:57:05 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:34.511 05:57:05 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:34.511 05:57:05 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:34.511 05:57:05 -- setup/devices.sh@196 -- # blocks=() 00:04:34.511 05:57:05 -- setup/devices.sh@196 -- # declare -a blocks 00:04:34.511 05:57:05 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:34.511 05:57:05 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:34.511 05:57:05 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:34.511 05:57:05 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:34.511 05:57:05 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:34.511 05:57:05 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:34.511 05:57:05 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:34.511 05:57:05 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:34.511 05:57:05 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:34.511 05:57:05 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:34.511 05:57:05 -- 
scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:34.511 No valid GPT data, bailing 00:04:34.511 05:57:05 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:34.511 05:57:05 -- scripts/common.sh@393 -- # pt= 00:04:34.511 05:57:05 -- scripts/common.sh@394 -- # return 1 00:04:34.511 05:57:05 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:34.511 05:57:05 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:34.511 05:57:05 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:34.511 05:57:05 -- setup/common.sh@80 -- # echo 5368709120 00:04:34.511 05:57:05 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:34.511 05:57:05 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:34.511 05:57:05 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:34.511 05:57:05 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:34.511 05:57:05 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:34.511 05:57:05 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:34.511 05:57:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:34.511 05:57:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:34.511 05:57:05 -- common/autotest_common.sh@10 -- # set +x 00:04:34.511 ************************************ 00:04:34.511 START TEST nvme_mount 00:04:34.511 ************************************ 00:04:34.512 05:57:05 -- common/autotest_common.sh@1104 -- # nvme_mount 00:04:34.512 05:57:05 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:34.512 05:57:05 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:34.512 05:57:05 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:34.512 05:57:05 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:34.512 05:57:05 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:34.770 05:57:05 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:34.770 05:57:05 -- setup/common.sh@40 -- # local part_no=1 00:04:34.770 05:57:05 -- setup/common.sh@41 -- # local size=1073741824 00:04:34.770 05:57:05 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:34.770 05:57:05 -- setup/common.sh@44 -- # parts=() 00:04:34.770 05:57:05 -- setup/common.sh@44 -- # local parts 00:04:34.770 05:57:05 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:34.770 05:57:05 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:34.770 05:57:05 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:34.770 05:57:05 -- setup/common.sh@46 -- # (( part++ )) 00:04:34.770 05:57:05 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:34.770 05:57:05 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:34.770 05:57:05 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:34.770 05:57:05 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:35.707 Creating new GPT entries in memory. 00:04:35.707 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:35.707 other utilities. 00:04:35.707 05:57:06 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:35.707 05:57:06 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:35.707 05:57:06 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:35.707 05:57:06 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:35.707 05:57:06 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:36.644 Creating new GPT entries in memory. 00:04:36.644 The operation has completed successfully. 00:04:36.644 05:57:07 -- setup/common.sh@57 -- # (( part++ )) 00:04:36.644 05:57:07 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:36.644 05:57:07 -- setup/common.sh@62 -- # wait 96666 00:04:36.644 05:57:07 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:36.644 05:57:07 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:36.644 05:57:07 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:36.644 05:57:07 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:36.644 05:57:07 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:36.644 05:57:07 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:36.644 05:57:07 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:36.644 05:57:07 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:36.644 05:57:07 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:36.644 05:57:07 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:36.644 05:57:07 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:36.644 05:57:07 -- setup/devices.sh@53 -- # local found=0 00:04:36.644 05:57:07 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:36.644 05:57:07 -- setup/devices.sh@56 -- # : 00:04:36.644 05:57:07 -- setup/devices.sh@59 -- # local pci status 00:04:36.644 05:57:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.644 05:57:07 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:36.644 05:57:07 -- setup/devices.sh@47 -- # setup output config 00:04:36.644 05:57:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.644 05:57:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:36.903 05:57:07 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:36.903 05:57:07 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:36.903 05:57:07 -- setup/devices.sh@63 -- # found=1 00:04:36.903 05:57:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.903 05:57:07 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:36.903 05:57:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.163 05:57:07 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:37.163 05:57:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.068 05:57:09 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:39.068 05:57:09 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:39.068 05:57:09 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:39.068 05:57:09 -- setup/devices.sh@73 -- # 
[[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:39.068 05:57:09 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:39.068 05:57:09 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:39.068 05:57:09 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:39.068 05:57:09 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:39.068 05:57:09 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:39.068 05:57:09 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:39.068 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:39.068 05:57:09 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:39.068 05:57:09 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:39.068 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:39.068 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:39.068 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:39.068 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:39.068 05:57:09 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:39.068 05:57:09 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:39.068 05:57:09 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:39.068 05:57:09 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:39.068 05:57:09 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:39.068 05:57:09 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:39.068 05:57:09 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:39.068 05:57:09 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:39.068 05:57:09 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:39.068 05:57:09 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:39.068 05:57:09 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:39.068 05:57:09 -- setup/devices.sh@53 -- # local found=0 00:04:39.068 05:57:09 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:39.068 05:57:09 -- setup/devices.sh@56 -- # : 00:04:39.068 05:57:09 -- setup/devices.sh@59 -- # local pci status 00:04:39.069 05:57:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.069 05:57:09 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:39.069 05:57:09 -- setup/devices.sh@47 -- # setup output config 00:04:39.069 05:57:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.069 05:57:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:39.327 05:57:09 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:39.327 05:57:09 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:39.328 05:57:09 -- setup/devices.sh@63 -- # found=1 00:04:39.328 05:57:09 -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:04:39.328 05:57:09 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:39.328 05:57:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.328 05:57:09 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:39.328 05:57:09 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.233 05:57:11 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:41.233 05:57:11 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:41.233 05:57:11 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.233 05:57:11 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:41.233 05:57:11 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:41.233 05:57:11 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.233 05:57:11 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:04:41.233 05:57:11 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:41.233 05:57:11 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:41.233 05:57:11 -- setup/devices.sh@50 -- # local mount_point= 00:04:41.233 05:57:11 -- setup/devices.sh@51 -- # local test_file= 00:04:41.233 05:57:11 -- setup/devices.sh@53 -- # local found=0 00:04:41.233 05:57:11 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:41.233 05:57:11 -- setup/devices.sh@59 -- # local pci status 00:04:41.233 05:57:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.234 05:57:11 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:41.234 05:57:11 -- setup/devices.sh@47 -- # setup output config 00:04:41.234 05:57:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.234 05:57:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:41.492 05:57:11 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:41.492 05:57:11 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:41.492 05:57:11 -- setup/devices.sh@63 -- # found=1 00:04:41.492 05:57:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.492 05:57:11 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:41.492 05:57:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.492 05:57:12 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:41.492 05:57:12 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.397 05:57:13 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:43.397 05:57:13 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:43.397 05:57:13 -- setup/devices.sh@68 -- # return 0 00:04:43.397 05:57:13 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:43.397 05:57:13 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:43.397 05:57:13 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:43.397 05:57:13 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:43.397 05:57:13 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:43.397 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:43.397 00:04:43.397 real 0m8.733s 00:04:43.397 user 0m0.703s 00:04:43.397 sys 0m6.059s 00:04:43.397 05:57:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.397 05:57:13 -- 
common/autotest_common.sh@10 -- # set +x 00:04:43.397 ************************************ 00:04:43.397 END TEST nvme_mount 00:04:43.397 ************************************ 00:04:43.397 05:57:13 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:43.397 05:57:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:43.397 05:57:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:43.397 05:57:13 -- common/autotest_common.sh@10 -- # set +x 00:04:43.397 ************************************ 00:04:43.397 START TEST dm_mount 00:04:43.397 ************************************ 00:04:43.397 05:57:13 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:43.397 05:57:13 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:43.397 05:57:13 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:43.397 05:57:13 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:43.397 05:57:13 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:43.398 05:57:13 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:43.398 05:57:13 -- setup/common.sh@40 -- # local part_no=2 00:04:43.398 05:57:13 -- setup/common.sh@41 -- # local size=1073741824 00:04:43.398 05:57:13 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:43.398 05:57:13 -- setup/common.sh@44 -- # parts=() 00:04:43.398 05:57:13 -- setup/common.sh@44 -- # local parts 00:04:43.398 05:57:13 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:43.398 05:57:13 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:43.398 05:57:13 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:43.398 05:57:13 -- setup/common.sh@46 -- # (( part++ )) 00:04:43.398 05:57:13 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:43.398 05:57:13 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:43.398 05:57:13 -- setup/common.sh@46 -- # (( part++ )) 00:04:43.398 05:57:13 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:43.398 05:57:13 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:43.398 05:57:13 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:43.398 05:57:13 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:44.333 Creating new GPT entries in memory. 00:04:44.333 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:44.333 other utilities. 00:04:44.333 05:57:14 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:44.333 05:57:14 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:44.333 05:57:14 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:44.333 05:57:14 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:44.333 05:57:14 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:45.710 Creating new GPT entries in memory. 00:04:45.710 The operation has completed successfully. 00:04:45.710 05:57:16 -- setup/common.sh@57 -- # (( part++ )) 00:04:45.710 05:57:16 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:45.710 05:57:16 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:45.710 05:57:16 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:45.710 05:57:16 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:46.646 The operation has completed successfully. 
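The partition_drive loop traced above derives each partition's LBA range from a fixed size: 1073741824 bytes divided by 4096 gives a span of 262144, the first partition starts at LBA 2048, and each later one starts right after the previous end, which reproduces --new=1:2048:264191 and --new=2:264192:526335 exactly. A sketch of that arithmetic, reconstructed from the xtrace; the real helper also routes each step through scripts/sync_dev_uevents.sh to wait for partition uevents, which this sketch omits:

# partition_drive DISK [PART_NO]: wipe DISK's label and create PART_NO
# equally sized partitions, mirroring the traced setup/common.sh logic.
partition_drive() {
    local disk=$1 part_no=${2:-1}
    local size=1073741824              # 1 GiB, as in the trace
    local parts=() part part_start=0 part_end=0

    for ((part = 1; part <= part_no; part++)); do
        parts+=("${disk}p$part")       # nvme0n1 -> nvme0n1p1, nvme0n1p2, ...
    done

    ((size /= 4096))                   # 262144, the per-partition LBA span used below

    sgdisk "/dev/$disk" --zap-all      # destroy any existing GPT/MBR structures

    for ((part = 1; part <= part_no; part++)); do
        ((part_start = part_start == 0 ? 2048 : part_end + 1))
        ((part_end = part_start + size - 1))
        # flock keeps concurrent probes off the disk while the table is rewritten
        flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
    done
}

With disk=nvme0n1 and part_no=2 this issues the same three sgdisk calls logged above; the 'GPT data structures destroyed!' and 'The operation has completed successfully.' lines are sgdisk's own output.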
00:04:46.646 05:57:17 -- setup/common.sh@57 -- # (( part++ )) 00:04:46.646 05:57:17 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:46.646 05:57:17 -- setup/common.sh@62 -- # wait 97180 00:04:46.646 05:57:17 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:46.646 05:57:17 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:46.646 05:57:17 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:46.646 05:57:17 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:46.646 05:57:17 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:46.646 05:57:17 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:46.646 05:57:17 -- setup/devices.sh@161 -- # break 00:04:46.646 05:57:17 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:46.647 05:57:17 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:46.647 05:57:17 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:46.647 05:57:17 -- setup/devices.sh@166 -- # dm=dm-0 00:04:46.647 05:57:17 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:46.647 05:57:17 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:46.647 05:57:17 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:46.647 05:57:17 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:46.647 05:57:17 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:46.647 05:57:17 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:46.647 05:57:17 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:46.647 05:57:17 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:46.647 05:57:17 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:46.647 05:57:17 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:46.647 05:57:17 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:46.647 05:57:17 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:46.647 05:57:17 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:46.647 05:57:17 -- setup/devices.sh@53 -- # local found=0 00:04:46.647 05:57:17 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:46.647 05:57:17 -- setup/devices.sh@56 -- # : 00:04:46.647 05:57:17 -- setup/devices.sh@59 -- # local pci status 00:04:46.647 05:57:17 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:46.647 05:57:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.647 05:57:17 -- setup/devices.sh@47 -- # setup output config 00:04:46.647 05:57:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.647 05:57:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:46.906 05:57:17 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:46.906 05:57:17 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:46.906 05:57:17 -- setup/devices.sh@63 -- # found=1 00:04:46.906 05:57:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.906 05:57:17 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:46.906 05:57:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.165 05:57:17 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:47.165 05:57:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.068 05:57:19 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:49.069 05:57:19 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:49.069 05:57:19 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:49.069 05:57:19 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:49.069 05:57:19 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:49.069 05:57:19 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:49.069 05:57:19 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:49.069 05:57:19 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:49.069 05:57:19 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:49.069 05:57:19 -- setup/devices.sh@50 -- # local mount_point= 00:04:49.069 05:57:19 -- setup/devices.sh@51 -- # local test_file= 00:04:49.069 05:57:19 -- setup/devices.sh@53 -- # local found=0 00:04:49.069 05:57:19 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:49.069 05:57:19 -- setup/devices.sh@59 -- # local pci status 00:04:49.069 05:57:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.069 05:57:19 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:49.069 05:57:19 -- setup/devices.sh@47 -- # setup output config 00:04:49.069 05:57:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.069 05:57:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:49.069 05:57:19 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:49.069 05:57:19 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:49.069 05:57:19 -- setup/devices.sh@63 -- # found=1 00:04:49.069 05:57:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.069 05:57:19 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:49.069 05:57:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.327 05:57:19 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:49.327 05:57:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.269 05:57:21 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:51.269 05:57:21 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:51.269 05:57:21 -- setup/devices.sh@68 -- # return 0 00:04:51.269 05:57:21 -- setup/devices.sh@187 -- # cleanup_dm 00:04:51.269 05:57:21 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:51.269 05:57:21 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:51.269 05:57:21 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:51.269 05:57:21 -- 
setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:51.269 05:57:21 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:51.269 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:51.269 05:57:21 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:51.269 05:57:21 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:51.269 00:04:51.269 real 0m7.668s 00:04:51.269 user 0m0.496s 00:04:51.269 sys 0m4.068s 00:04:51.269 05:57:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.269 ************************************ 00:04:51.269 05:57:21 -- common/autotest_common.sh@10 -- # set +x 00:04:51.269 END TEST dm_mount 00:04:51.269 ************************************ 00:04:51.269 05:57:21 -- setup/devices.sh@1 -- # cleanup 00:04:51.269 05:57:21 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:51.269 05:57:21 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:51.269 05:57:21 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:51.269 05:57:21 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:51.269 05:57:21 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:51.269 05:57:21 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:51.269 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:51.269 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:51.269 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:51.269 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:51.269 05:57:21 -- setup/devices.sh@12 -- # cleanup_dm 00:04:51.269 05:57:21 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:51.269 05:57:21 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:51.269 05:57:21 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:51.269 05:57:21 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:51.269 05:57:21 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:51.269 05:57:21 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:51.269 00:04:51.269 real 0m17.306s 00:04:51.269 user 0m1.624s 00:04:51.269 sys 0m10.610s 00:04:51.269 05:57:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.269 05:57:21 -- common/autotest_common.sh@10 -- # set +x 00:04:51.269 ************************************ 00:04:51.269 END TEST devices 00:04:51.269 ************************************ 00:04:51.269 00:04:51.269 real 0m36.867s 00:04:51.269 user 0m6.687s 00:04:51.269 sys 0m25.643s 00:04:51.269 05:57:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.269 05:57:21 -- common/autotest_common.sh@10 -- # set +x 00:04:51.269 ************************************ 00:04:51.269 END TEST setup.sh 00:04:51.269 ************************************ 00:04:51.269 05:57:21 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:51.528 Hugepages 00:04:51.528 node hugesize free / total 00:04:51.528 node0 1048576kB 0 / 0 00:04:51.528 node0 2048kB 2048 / 2048 00:04:51.528 00:04:51.528 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:51.528 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:51.785 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:51.785 05:57:22 -- spdk/autotest.sh@141 -- # uname -s 00:04:51.785 05:57:22 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:51.785 05:57:22 -- spdk/autotest.sh@143 -- # 
nvme_namespace_revert 00:04:51.786 05:57:22 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:52.352 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:52.352 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:54.255 05:57:24 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:55.191 05:57:25 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:55.191 05:57:25 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:55.191 05:57:25 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:04:55.191 05:57:25 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:04:55.191 05:57:25 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:55.191 05:57:25 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:55.191 05:57:25 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:55.191 05:57:25 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:55.191 05:57:25 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:55.191 05:57:25 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:55.191 05:57:25 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:04:55.191 05:57:25 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:55.758 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:55.758 Waiting for block devices as requested 00:04:55.758 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:55.758 05:57:26 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:55.758 05:57:26 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:55.758 05:57:26 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:55.758 05:57:26 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:04:55.758 05:57:26 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:55.758 05:57:26 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:04:55.758 05:57:26 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:55.758 05:57:26 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:55.758 05:57:26 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:04:55.758 05:57:26 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:04:55.758 05:57:26 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:55.758 05:57:26 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:04:55.758 05:57:26 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:55.758 05:57:26 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:04:55.758 05:57:26 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:55.758 05:57:26 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:55.758 05:57:26 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:04:55.758 05:57:26 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:55.758 05:57:26 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:55.758 05:57:26 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:55.758 05:57:26 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:55.758 05:57:26 -- common/autotest_common.sh@1542 -- # continue 00:04:55.758 05:57:26 
-- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:04:55.758 05:57:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:55.758 05:57:26 -- common/autotest_common.sh@10 -- # set +x 00:04:56.017 05:57:26 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:04:56.017 05:57:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:56.017 05:57:26 -- common/autotest_common.sh@10 -- # set +x 00:04:56.017 05:57:26 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:56.275 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:56.534 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:57.470 05:57:27 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:04:57.470 05:57:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:57.470 05:57:27 -- common/autotest_common.sh@10 -- # set +x 00:04:57.470 05:57:27 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:04:57.470 05:57:27 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:57.470 05:57:27 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:57.470 05:57:27 -- common/autotest_common.sh@1562 -- # bdfs=() 00:04:57.470 05:57:27 -- common/autotest_common.sh@1562 -- # local bdfs 00:04:57.470 05:57:27 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:57.470 05:57:27 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:57.470 05:57:27 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:57.470 05:57:27 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:57.470 05:57:27 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:57.470 05:57:27 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:57.470 05:57:28 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:57.470 05:57:28 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:04:57.470 05:57:28 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:57.470 05:57:28 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:57.470 05:57:28 -- common/autotest_common.sh@1565 -- # device=0x0010 00:04:57.470 05:57:28 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:57.470 05:57:28 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:04:57.470 05:57:28 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:57.470 05:57:28 -- common/autotest_common.sh@1578 -- # return 0 00:04:57.470 05:57:28 -- spdk/autotest.sh@161 -- # '[' 1 -eq 1 ']' 00:04:57.470 05:57:28 -- spdk/autotest.sh@162 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:57.470 05:57:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:57.471 05:57:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:57.471 05:57:28 -- common/autotest_common.sh@10 -- # set +x 00:04:57.471 ************************************ 00:04:57.471 START TEST unittest 00:04:57.471 ************************************ 00:04:57.471 05:57:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:57.471 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:57.471 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:04:57.471 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:04:57.471 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:57.471 ++ readlink -f 
/home/vagrant/spdk_repo/spdk/test/unit/../.. 00:04:57.471 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:57.471 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:04:57.471 ++ rpc_py=rpc_cmd 00:04:57.471 ++ set -e 00:04:57.471 ++ shopt -s nullglob 00:04:57.471 ++ shopt -s extglob 00:04:57.471 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:04:57.471 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:04:57.471 +++ CONFIG_WPDK_DIR= 00:04:57.471 +++ CONFIG_ASAN=y 00:04:57.471 +++ CONFIG_VBDEV_COMPRESS=n 00:04:57.471 +++ CONFIG_HAVE_EXECINFO_H=y 00:04:57.471 +++ CONFIG_USDT=n 00:04:57.471 +++ CONFIG_CUSTOMOCF=n 00:04:57.471 +++ CONFIG_PREFIX=/usr/local 00:04:57.471 +++ CONFIG_RBD=n 00:04:57.471 +++ CONFIG_LIBDIR= 00:04:57.471 +++ CONFIG_IDXD=y 00:04:57.471 +++ CONFIG_NVME_CUSE=y 00:04:57.471 +++ CONFIG_SMA=n 00:04:57.471 +++ CONFIG_VTUNE=n 00:04:57.471 +++ CONFIG_TSAN=n 00:04:57.471 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:04:57.471 +++ CONFIG_VFIO_USER_DIR= 00:04:57.471 +++ CONFIG_PGO_CAPTURE=n 00:04:57.471 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:04:57.471 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:57.471 +++ CONFIG_LTO=n 00:04:57.471 +++ CONFIG_ISCSI_INITIATOR=y 00:04:57.471 +++ CONFIG_CET=n 00:04:57.471 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:04:57.471 +++ CONFIG_OCF_PATH= 00:04:57.471 +++ CONFIG_RDMA_SET_TOS=y 00:04:57.471 +++ CONFIG_HAVE_ARC4RANDOM=n 00:04:57.471 +++ CONFIG_HAVE_LIBARCHIVE=n 00:04:57.471 +++ CONFIG_UBLK=n 00:04:57.471 +++ CONFIG_ISAL_CRYPTO=y 00:04:57.471 +++ CONFIG_OPENSSL_PATH= 00:04:57.731 +++ CONFIG_OCF=n 00:04:57.731 +++ CONFIG_FUSE=n 00:04:57.731 +++ CONFIG_VTUNE_DIR= 00:04:57.731 +++ CONFIG_FUZZER_LIB= 00:04:57.731 +++ CONFIG_FUZZER=n 00:04:57.731 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:04:57.731 +++ CONFIG_CRYPTO=n 00:04:57.731 +++ CONFIG_PGO_USE=n 00:04:57.731 +++ CONFIG_VHOST=y 00:04:57.731 +++ CONFIG_DAOS=n 00:04:57.731 +++ CONFIG_DPDK_INC_DIR= 00:04:57.731 +++ CONFIG_DAOS_DIR= 00:04:57.731 +++ CONFIG_UNIT_TESTS=y 00:04:57.731 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:04:57.731 +++ CONFIG_VIRTIO=y 00:04:57.731 +++ CONFIG_COVERAGE=y 00:04:57.731 +++ CONFIG_RDMA=y 00:04:57.731 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:04:57.731 +++ CONFIG_URING_PATH= 00:04:57.731 +++ CONFIG_XNVME=n 00:04:57.731 +++ CONFIG_VFIO_USER=n 00:04:57.731 +++ CONFIG_ARCH=native 00:04:57.731 +++ CONFIG_URING_ZNS=n 00:04:57.731 +++ CONFIG_WERROR=y 00:04:57.731 +++ CONFIG_HAVE_LIBBSD=n 00:04:57.731 +++ CONFIG_UBSAN=y 00:04:57.731 +++ CONFIG_IPSEC_MB_DIR= 00:04:57.731 +++ CONFIG_GOLANG=n 00:04:57.731 +++ CONFIG_ISAL=y 00:04:57.731 +++ CONFIG_IDXD_KERNEL=n 00:04:57.731 +++ CONFIG_DPDK_LIB_DIR= 00:04:57.731 +++ CONFIG_RDMA_PROV=verbs 00:04:57.731 +++ CONFIG_APPS=y 00:04:57.731 +++ CONFIG_SHARED=n 00:04:57.731 +++ CONFIG_FC_PATH= 00:04:57.731 +++ CONFIG_DPDK_PKG_CONFIG=n 00:04:57.731 +++ CONFIG_FC=n 00:04:57.731 +++ CONFIG_AVAHI=n 00:04:57.731 +++ CONFIG_FIO_PLUGIN=y 00:04:57.731 +++ CONFIG_RAID5F=n 00:04:57.731 +++ CONFIG_EXAMPLES=y 00:04:57.731 +++ CONFIG_TESTS=y 00:04:57.731 +++ CONFIG_CRYPTO_MLX5=n 00:04:57.731 +++ CONFIG_MAX_LCORES= 00:04:57.731 +++ CONFIG_IPSEC_MB=n 00:04:57.731 +++ CONFIG_DEBUG=y 00:04:57.731 +++ CONFIG_DPDK_COMPRESSDEV=n 00:04:57.731 +++ CONFIG_CROSS_PREFIX= 00:04:57.731 +++ CONFIG_URING=n 00:04:57.731 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:04:57.731 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:04:57.731 ++++ readlink -f 
/home/vagrant/spdk_repo/spdk/test/common 00:04:57.731 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:04:57.731 +++ _root=/home/vagrant/spdk_repo/spdk 00:04:57.731 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:04:57.731 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:04:57.731 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:04:57.731 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:04:57.731 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:04:57.731 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:04:57.731 +++ VHOST_APP=("$_app_dir/vhost") 00:04:57.731 +++ DD_APP=("$_app_dir/spdk_dd") 00:04:57.731 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:04:57.731 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:04:57.731 +++ [[ #ifndef SPDK_CONFIG_H 00:04:57.731 #define SPDK_CONFIG_H 00:04:57.731 #define SPDK_CONFIG_APPS 1 00:04:57.731 #define SPDK_CONFIG_ARCH native 00:04:57.731 #define SPDK_CONFIG_ASAN 1 00:04:57.731 #undef SPDK_CONFIG_AVAHI 00:04:57.731 #undef SPDK_CONFIG_CET 00:04:57.731 #define SPDK_CONFIG_COVERAGE 1 00:04:57.731 #define SPDK_CONFIG_CROSS_PREFIX 00:04:57.731 #undef SPDK_CONFIG_CRYPTO 00:04:57.731 #undef SPDK_CONFIG_CRYPTO_MLX5 00:04:57.731 #undef SPDK_CONFIG_CUSTOMOCF 00:04:57.731 #undef SPDK_CONFIG_DAOS 00:04:57.731 #define SPDK_CONFIG_DAOS_DIR 00:04:57.731 #define SPDK_CONFIG_DEBUG 1 00:04:57.731 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:04:57.731 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:57.731 #define SPDK_CONFIG_DPDK_INC_DIR 00:04:57.731 #define SPDK_CONFIG_DPDK_LIB_DIR 00:04:57.731 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:04:57.731 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:57.731 #define SPDK_CONFIG_EXAMPLES 1 00:04:57.731 #undef SPDK_CONFIG_FC 00:04:57.731 #define SPDK_CONFIG_FC_PATH 00:04:57.731 #define SPDK_CONFIG_FIO_PLUGIN 1 00:04:57.731 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:04:57.731 #undef SPDK_CONFIG_FUSE 00:04:57.731 #undef SPDK_CONFIG_FUZZER 00:04:57.731 #define SPDK_CONFIG_FUZZER_LIB 00:04:57.731 #undef SPDK_CONFIG_GOLANG 00:04:57.731 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:04:57.731 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:04:57.731 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:04:57.731 #undef SPDK_CONFIG_HAVE_LIBBSD 00:04:57.731 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:04:57.731 #define SPDK_CONFIG_IDXD 1 00:04:57.731 #undef SPDK_CONFIG_IDXD_KERNEL 00:04:57.731 #undef SPDK_CONFIG_IPSEC_MB 00:04:57.731 #define SPDK_CONFIG_IPSEC_MB_DIR 00:04:57.731 #define SPDK_CONFIG_ISAL 1 00:04:57.731 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:04:57.731 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:04:57.731 #define SPDK_CONFIG_LIBDIR 00:04:57.731 #undef SPDK_CONFIG_LTO 00:04:57.731 #define SPDK_CONFIG_MAX_LCORES 00:04:57.731 #define SPDK_CONFIG_NVME_CUSE 1 00:04:57.731 #undef SPDK_CONFIG_OCF 00:04:57.731 #define SPDK_CONFIG_OCF_PATH 00:04:57.731 #define SPDK_CONFIG_OPENSSL_PATH 00:04:57.731 #undef SPDK_CONFIG_PGO_CAPTURE 00:04:57.731 #undef SPDK_CONFIG_PGO_USE 00:04:57.731 #define SPDK_CONFIG_PREFIX /usr/local 00:04:57.731 #undef SPDK_CONFIG_RAID5F 00:04:57.731 #undef SPDK_CONFIG_RBD 00:04:57.731 #define SPDK_CONFIG_RDMA 1 00:04:57.731 #define SPDK_CONFIG_RDMA_PROV verbs 00:04:57.731 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:04:57.731 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:04:57.731 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:04:57.731 #undef SPDK_CONFIG_SHARED 00:04:57.731 #undef SPDK_CONFIG_SMA 00:04:57.731 #define SPDK_CONFIG_TESTS 1 00:04:57.731 #undef 
SPDK_CONFIG_TSAN 00:04:57.731 #undef SPDK_CONFIG_UBLK 00:04:57.731 #define SPDK_CONFIG_UBSAN 1 00:04:57.731 #define SPDK_CONFIG_UNIT_TESTS 1 00:04:57.731 #undef SPDK_CONFIG_URING 00:04:57.731 #define SPDK_CONFIG_URING_PATH 00:04:57.731 #undef SPDK_CONFIG_URING_ZNS 00:04:57.731 #undef SPDK_CONFIG_USDT 00:04:57.731 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:04:57.731 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:04:57.731 #undef SPDK_CONFIG_VFIO_USER 00:04:57.731 #define SPDK_CONFIG_VFIO_USER_DIR 00:04:57.731 #define SPDK_CONFIG_VHOST 1 00:04:57.731 #define SPDK_CONFIG_VIRTIO 1 00:04:57.731 #undef SPDK_CONFIG_VTUNE 00:04:57.731 #define SPDK_CONFIG_VTUNE_DIR 00:04:57.731 #define SPDK_CONFIG_WERROR 1 00:04:57.731 #define SPDK_CONFIG_WPDK_DIR 00:04:57.731 #undef SPDK_CONFIG_XNVME 00:04:57.731 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:04:57.731 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:04:57.731 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:57.731 +++ [[ -e /bin/wpdk_common.sh ]] 00:04:57.731 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:57.732 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:57.732 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:57.732 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:57.732 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:57.732 ++++ export PATH 00:04:57.732 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:57.732 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:04:57.732 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:04:57.732 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:04:57.732 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:04:57.732 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:04:57.732 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:04:57.732 +++ TEST_TAG=N/A 00:04:57.732 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:04:57.732 ++ : 1 00:04:57.732 ++ export RUN_NIGHTLY 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_RUN_VALGRIND 00:04:57.732 ++ : 1 00:04:57.732 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:04:57.732 ++ : 1 00:04:57.732 ++ export SPDK_TEST_UNITTEST 00:04:57.732 ++ : 00:04:57.732 ++ export SPDK_TEST_AUTOBUILD 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_RELEASE_BUILD 00:04:57.732 ++ : 0 00:04:57.732 ++ 
export SPDK_TEST_ISAL 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_ISCSI 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_ISCSI_INITIATOR 00:04:57.732 ++ : 1 00:04:57.732 ++ export SPDK_TEST_NVME 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_NVME_PMR 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_NVME_BP 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_NVME_CLI 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_NVME_CUSE 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_NVME_FDP 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_NVMF 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_VFIOUSER 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_VFIOUSER_QEMU 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_FUZZER 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_FUZZER_SHORT 00:04:57.732 ++ : rdma 00:04:57.732 ++ export SPDK_TEST_NVMF_TRANSPORT 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_RBD 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_VHOST 00:04:57.732 ++ : 1 00:04:57.732 ++ export SPDK_TEST_BLOCKDEV 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_IOAT 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_BLOBFS 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_VHOST_INIT 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_LVOL 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_VBDEV_COMPRESS 00:04:57.732 ++ : 1 00:04:57.732 ++ export SPDK_RUN_ASAN 00:04:57.732 ++ : 1 00:04:57.732 ++ export SPDK_RUN_UBSAN 00:04:57.732 ++ : 00:04:57.732 ++ export SPDK_RUN_EXTERNAL_DPDK 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_RUN_NON_ROOT 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_CRYPTO 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_FTL 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_OCF 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_VMD 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_OPAL 00:04:57.732 ++ : 00:04:57.732 ++ export SPDK_TEST_NATIVE_DPDK 00:04:57.732 ++ : true 00:04:57.732 ++ export SPDK_AUTOTEST_X 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_RAID5 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_URING 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_USDT 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_USE_IGB_UIO 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_SCHEDULER 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_SCANBUILD 00:04:57.732 ++ : 00:04:57.732 ++ export SPDK_TEST_NVMF_NICS 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_SMA 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_DAOS 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_XNVME 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_ACCEL_DSA 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_ACCEL_IAA 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_ACCEL_IOAT 00:04:57.732 ++ : 00:04:57.732 ++ export SPDK_TEST_FUZZER_TARGET 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_TEST_NVMF_MDNS 00:04:57.732 ++ : 0 00:04:57.732 ++ export SPDK_JSONRPC_GO_CLIENT 00:04:57.732 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:04:57.732 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:04:57.732 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:04:57.732 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:04:57.732 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:57.732 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:57.732 ++ export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:57.732 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:57.732 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:04:57.732 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:04:57.732 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:04:57.732 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:04:57.732 ++ export PYTHONDONTWRITEBYTECODE=1 00:04:57.732 ++ PYTHONDONTWRITEBYTECODE=1 00:04:57.732 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:04:57.732 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:04:57.732 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:04:57.732 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:04:57.732 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:04:57.732 ++ rm -rf /var/tmp/asan_suppression_file 00:04:57.732 ++ cat 00:04:57.732 ++ echo leak:libfuse3.so 00:04:57.732 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:04:57.732 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:04:57.732 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:04:57.732 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:04:57.732 ++ '[' -z /var/spdk/dependencies ']' 00:04:57.732 ++ export DEPENDENCY_DIR 00:04:57.732 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:04:57.732 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:04:57.732 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:04:57.732 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:04:57.732 ++ export QEMU_BIN= 00:04:57.732 ++ QEMU_BIN= 00:04:57.732 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:04:57.732 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:04:57.732 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:04:57.732 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:04:57.732 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:57.732 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:57.732 ++ '[' 0 -eq 0 ']' 00:04:57.732 ++ export valgrind= 00:04:57.732 ++ valgrind= 00:04:57.732 +++ uname -s 00:04:57.732 ++ '[' Linux = Linux ']' 00:04:57.732 ++ HUGEMEM=4096 00:04:57.732 ++ export CLEAR_HUGE=yes 00:04:57.732 ++ CLEAR_HUGE=yes 00:04:57.732 ++ [[ 0 -eq 1 ]] 00:04:57.732 ++ [[ 0 -eq 1 ]] 00:04:57.732 ++ MAKE=make 00:04:57.732 +++ nproc 00:04:57.732 ++ MAKEFLAGS=-j10 00:04:57.732 ++ export HUGEMEM=4096 00:04:57.732 ++ HUGEMEM=4096 00:04:57.732 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:04:57.732 ++ NO_HUGE=() 00:04:57.732 ++ TEST_MODE= 00:04:57.732 ++ [[ -z '' ]] 00:04:57.732 ++ 
PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:04:57.732 ++ exec 00:04:57.732 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:04:57.732 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:04:57.732 ++ set_test_storage 2147483648 00:04:57.732 ++ [[ -v testdir ]] 00:04:57.732 ++ local requested_size=2147483648 00:04:57.732 ++ local mount target_dir 00:04:57.732 ++ local -A mounts fss sizes avails uses 00:04:57.732 ++ local source fs size avail mount use 00:04:57.732 ++ local storage_fallback storage_candidates 00:04:57.732 +++ mktemp -udt spdk.XXXXXX 00:04:57.732 ++ storage_fallback=/tmp/spdk.2YgL9y 00:04:57.732 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:04:57.732 ++ [[ -n '' ]] 00:04:57.732 ++ [[ -n '' ]] 00:04:57.732 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.2YgL9y/tests/unit /tmp/spdk.2YgL9y 00:04:57.732 ++ requested_size=2214592512 00:04:57.732 ++ read -r source fs size use avail _ mount 00:04:57.732 +++ df -T 00:04:57.732 +++ grep -v Filesystem 00:04:57.732 ++ mounts["$mount"]=tmpfs 00:04:57.732 ++ fss["$mount"]=tmpfs 00:04:57.733 ++ avails["$mount"]=1252601856 00:04:57.733 ++ sizes["$mount"]=1253683200 00:04:57.733 ++ uses["$mount"]=1081344 00:04:57.733 ++ read -r source fs size use avail _ mount 00:04:57.733 ++ mounts["$mount"]=/dev/vda1 00:04:57.733 ++ fss["$mount"]=ext4 00:04:57.733 ++ avails["$mount"]=10483695616 00:04:57.733 ++ sizes["$mount"]=20616794112 00:04:57.733 ++ uses["$mount"]=10116321280 00:04:57.733 ++ read -r source fs size use avail _ mount 00:04:57.733 ++ mounts["$mount"]=tmpfs 00:04:57.733 ++ fss["$mount"]=tmpfs 00:04:57.733 ++ avails["$mount"]=6268403712 00:04:57.733 ++ sizes["$mount"]=6268403712 00:04:57.733 ++ uses["$mount"]=0 00:04:57.733 ++ read -r source fs size use avail _ mount 00:04:57.733 ++ mounts["$mount"]=tmpfs 00:04:57.733 ++ fss["$mount"]=tmpfs 00:04:57.733 ++ avails["$mount"]=5242880 00:04:57.733 ++ sizes["$mount"]=5242880 00:04:57.733 ++ uses["$mount"]=0 00:04:57.733 ++ read -r source fs size use avail _ mount 00:04:57.733 ++ mounts["$mount"]=/dev/vda15 00:04:57.733 ++ fss["$mount"]=vfat 00:04:57.733 ++ avails["$mount"]=103061504 00:04:57.733 ++ sizes["$mount"]=109395968 00:04:57.733 ++ uses["$mount"]=6334464 00:04:57.733 ++ read -r source fs size use avail _ mount 00:04:57.733 ++ mounts["$mount"]=tmpfs 00:04:57.733 ++ fss["$mount"]=tmpfs 00:04:57.733 ++ avails["$mount"]=1253675008 00:04:57.733 ++ sizes["$mount"]=1253679104 00:04:57.733 ++ uses["$mount"]=4096 00:04:57.733 ++ read -r source fs size use avail _ mount 00:04:57.733 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:04:57.733 ++ fss["$mount"]=fuse.sshfs 00:04:57.733 ++ avails["$mount"]=93713981440 00:04:57.733 ++ sizes["$mount"]=105088212992 00:04:57.733 ++ uses["$mount"]=5988798464 00:04:57.733 ++ read -r source fs size use avail _ mount 00:04:57.733 ++ printf '* Looking for test storage...\n' 00:04:57.733 * Looking for test storage... 
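The set_test_storage helper traced above parses df -T into parallel associative arrays keyed by mount point and then, in the candidate walk that follows, picks the first directory among $testdir, $storage_fallback/tests/... and $storage_fallback whose mount has enough free space. A condensed sketch of that logic, normalizing to bytes with --block-size=1 (an assumption; the traced run may rely on a different block size) and omitting the tmpfs/ramfs special cases and the 95% cap check that follow in the trace:

    # Sketch of set_test_storage: pick the first candidate dir whose mount has room.
    requested_size=$((2147483648 + 64 * 1024 * 1024))   # 2 GiB + overhead, as above
    declare -A mounts fss sizes avails uses

    # Parse `df -T` into per-mount lookup tables, dropping the header row.
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$size
        avails["$mount"]=$avail
        uses["$mount"]=$use
    done < <(df -T --block-size=1 | grep -v Filesystem)

    for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
        # Resolve which mount backs this candidate directory, as in the trace.
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        target_space=${avails["$mount"]}
        if (( target_space >= requested_size )); then
            printf '* Found test storage at %s\n' "$target_dir"
            break
        fi
    done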
00:04:57.733 ++ local target_space new_size 00:04:57.733 ++ for target_dir in "${storage_candidates[@]}" 00:04:57.733 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:04:57.733 +++ awk '$1 !~ /Filesystem/{print $6}' 00:04:57.733 ++ mount=/ 00:04:57.733 ++ target_space=10483695616 00:04:57.733 ++ (( target_space == 0 || target_space < requested_size )) 00:04:57.733 ++ (( target_space >= requested_size )) 00:04:57.733 ++ [[ ext4 == tmpfs ]] 00:04:57.733 ++ [[ ext4 == ramfs ]] 00:04:57.733 ++ [[ / == / ]] 00:04:57.733 ++ new_size=12330913792 00:04:57.733 ++ (( new_size * 100 / sizes[/] > 95 )) 00:04:57.733 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:04:57.733 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:04:57.733 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:04:57.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:04:57.733 ++ return 0 00:04:57.733 ++ set -o errtrace 00:04:57.733 ++ shopt -s extdebug 00:04:57.733 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:04:57.733 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:04:57.733 05:57:28 -- common/autotest_common.sh@1672 -- # true 00:04:57.733 05:57:28 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:04:57.733 05:57:28 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:04:57.733 05:57:28 -- common/autotest_common.sh@29 -- # exec 00:04:57.733 05:57:28 -- common/autotest_common.sh@31 -- # xtrace_restore 00:04:57.733 05:57:28 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:04:57.733 05:57:28 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:04:57.733 05:57:28 -- common/autotest_common.sh@18 -- # set -x 00:04:57.733 05:57:28 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:04:57.733 05:57:28 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:04:57.733 05:57:28 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:04:57.733 05:57:28 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:04:57.733 05:57:28 -- unit/unittest.sh@178 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:04:57.733 05:57:28 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=gcc 00:04:57.733 05:57:28 -- unit/unittest.sh@179 -- # hash lcov 00:04:57.733 05:57:28 -- unit/unittest.sh@179 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:57.733 05:57:28 -- unit/unittest.sh@179 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:57.733 05:57:28 -- unit/unittest.sh@180 -- # cov_avail=yes 00:04:57.733 05:57:28 -- unit/unittest.sh@184 -- # '[' yes = yes ']' 00:04:57.733 05:57:28 -- unit/unittest.sh@186 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:04:57.733 05:57:28 -- unit/unittest.sh@189 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:04:57.733 05:57:28 -- unit/unittest.sh@191 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:04:57.733 05:57:28 -- unit/unittest.sh@199 -- # export 'LCOV_OPTS= 00:04:57.733 --rc lcov_branch_coverage=1 00:04:57.733 --rc lcov_function_coverage=1 00:04:57.733 --rc genhtml_branch_coverage=1 00:04:57.733 --rc genhtml_function_coverage=1 00:04:57.733 --rc genhtml_legend=1 00:04:57.733 --rc geninfo_all_blocks=1 00:04:57.733 ' 00:04:57.733 05:57:28 -- unit/unittest.sh@199 -- # LCOV_OPTS=' 00:04:57.733 --rc lcov_branch_coverage=1 00:04:57.733 --rc lcov_function_coverage=1 00:04:57.733 --rc genhtml_branch_coverage=1 00:04:57.733 --rc genhtml_function_coverage=1 00:04:57.733 --rc genhtml_legend=1 00:04:57.733 
--rc geninfo_all_blocks=1
00:04:57.733 '
00:04:57.733 05:57:28 -- unit/unittest.sh@200 -- # export 'LCOV=lcov
00:04:57.733 --rc lcov_branch_coverage=1
00:04:57.733 --rc lcov_function_coverage=1
00:04:57.733 --rc genhtml_branch_coverage=1
00:04:57.733 --rc genhtml_function_coverage=1
00:04:57.733 --rc genhtml_legend=1
00:04:57.733 --rc geninfo_all_blocks=1
00:04:57.733 --no-external'
00:04:57.733 05:57:28 -- unit/unittest.sh@200 -- # LCOV='lcov
00:04:57.733 --rc lcov_branch_coverage=1
00:04:57.733 --rc lcov_function_coverage=1
00:04:57.733 --rc genhtml_branch_coverage=1
00:04:57.733 --rc genhtml_function_coverage=1
00:04:57.733 --rc genhtml_legend=1
00:04:57.733 --rc geninfo_all_blocks=1
00:04:57.733 --no-external'
00:04:57.733 05:57:28 -- unit/unittest.sh@202 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info
00:05:15.823 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno, ftl_p2l_upgrade.gcno and ftl_chunk_upgrade.gcno ("no functions found")
00:05:42.504 geninfo: WARNING: GCOV did not produce any data for the test/cpp_headers/*.gcno objects [one repetitive "no functions found" + geninfo warning pair per header-compile test, covering every header from accel through zipf; the identical warning lines are condensed here]
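For coverage runs, unittest.sh captures an initial ("-i") baseline before any test executes, so the final report can still list objects that were never loaded; the geninfo warnings above are expected for header-only compile tests that contain no functions. A sketch of the baseline capture plus a typical post-run combine step, assuming lcov 1.x semantics; the baseline path mirrors this run, while the test/total file names are illustrative:

    # Capture a zero-coverage baseline before tests run (-i = initial).
    UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage
    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external"

    $LCOV -q -c -i -d . -t Baseline -o "$UT_COVERAGE/ut_cov_base.info"

    # ... run the unit test binaries ...

    # Capture post-run counters and merge with the baseline so files that were
    # never executed still appear (with zero hits) in the final report.
    $LCOV -q -c -d . -t "$(hostname)" -o "$UT_COVERAGE/ut_cov_test.info"
    $LCOV -q -a "$UT_COVERAGE/ut_cov_base.info" \
          -a "$UT_COVERAGE/ut_cov_test.info" \
          -o "$UT_COVERAGE/ut_cov_total.info"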
00:05:43.883 05:58:14 -- unit/unittest.sh@206 -- # uname -m
00:05:43.883 05:58:14 -- unit/unittest.sh@206 -- # '[' x86_64 = aarch64 ']'
00:05:43.883 05:58:14 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut
00:05:43.883 05:58:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:43.883 05:58:14 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:43.883 05:58:14 -- common/autotest_common.sh@10 -- # set +x
00:05:43.883 ************************************
00:05:43.883 START TEST unittest_pci_event
00:05:43.883 ************************************
00:05:43.883 05:58:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut
00:05:43.883
00:05:43.883
00:05:43.883 CUnit - A unit testing framework for C - Version 2.1-3
00:05:43.883 http://cunit.sourceforge.net/
00:05:43.883
00:05:43.883
00:05:43.883 Suite: pci_event
00:05:43.883 Test: test_pci_parse_event
...[2024-06-11 05:58:14.187391] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:05:43.883 [2024-06-11 05:58:14.188247] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:05:43.883 passed 00:05:43.883 00:05:43.883 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.883 suites 1 1 n/a 0 0 00:05:43.883 tests 1 1 1 0 0 00:05:43.883 asserts 15 15 15 0 n/a 00:05:43.883 00:05:43.883 Elapsed time = 0.001 seconds 00:05:43.883 00:05:43.883 real 0m0.051s 00:05:43.883 user 0m0.035s 00:05:43.883 sys 0m0.012s 00:05:43.883 05:58:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.883 05:58:14 -- common/autotest_common.sh@10 -- # set +x 00:05:43.883 ************************************ 00:05:43.883 END TEST unittest_pci_event 00:05:43.883 ************************************ 00:05:43.883 05:58:14 -- unit/unittest.sh@211 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:43.883 05:58:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.883 05:58:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.883 05:58:14 -- common/autotest_common.sh@10 -- # set +x 00:05:43.883 ************************************ 00:05:43.883 START TEST unittest_include 00:05:43.883 ************************************ 00:05:43.883 05:58:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:43.883 00:05:43.883 00:05:43.883 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.883 http://cunit.sourceforge.net/ 00:05:43.883 00:05:43.883 00:05:43.883 Suite: histogram 00:05:43.883 Test: histogram_test ...passed 00:05:43.883 Test: histogram_merge ...passed 00:05:43.883 00:05:43.883 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.883 suites 1 1 n/a 0 0 00:05:43.883 tests 2 2 2 0 0 00:05:43.883 asserts 50 50 50 0 n/a 00:05:43.883 00:05:43.883 Elapsed time = 0.006 seconds 00:05:43.883 00:05:43.883 real 0m0.049s 00:05:43.883 user 0m0.024s 00:05:43.883 sys 0m0.026s 00:05:43.883 05:58:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.883 05:58:14 -- common/autotest_common.sh@10 -- # set +x 00:05:43.883 ************************************ 00:05:43.883 END TEST unittest_include 00:05:43.883 ************************************ 00:05:43.883 05:58:14 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:05:43.883 05:58:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.883 05:58:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.883 05:58:14 -- common/autotest_common.sh@10 -- # set +x 00:05:43.883 ************************************ 00:05:43.883 START TEST unittest_bdev 00:05:43.883 ************************************ 00:05:43.883 05:58:14 -- common/autotest_common.sh@1104 -- # unittest_bdev 00:05:43.883 05:58:14 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:05:43.883 00:05:43.883 00:05:43.883 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.883 http://cunit.sourceforge.net/ 00:05:43.883 00:05:43.883 00:05:43.883 Suite: bdev 00:05:43.883 Test: bytes_to_blocks_test ...passed 00:05:43.883 Test: num_blocks_test ...passed 00:05:43.883 Test: io_valid_test ...passed 00:05:44.140 Test: open_write_test ...[2024-06-11 05:58:14.544781] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:05:44.141 [2024-06-11 05:58:14.545333] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:05:44.141 [2024-06-11 05:58:14.545540] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:05:44.141 passed 00:05:44.141 Test: claim_test ...passed 00:05:44.141 Test: alias_add_del_test ...[2024-06-11 05:58:14.697023] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:05:44.141 [2024-06-11 05:58:14.697261] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4583:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:05:44.141 [2024-06-11 05:58:14.697360] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:05:44.141 passed 00:05:44.141 Test: get_device_stat_test ...passed 00:05:44.399 Test: bdev_io_types_test ...passed 00:05:44.399 Test: bdev_io_wait_test ...passed 00:05:44.399 Test: bdev_io_spans_split_test ...passed 00:05:44.399 Test: bdev_io_boundary_split_test ...passed 00:05:44.399 Test: bdev_io_max_size_and_segment_split_test ...[2024-06-11 05:58:14.930081] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:05:44.399 passed 00:05:44.399 Test: bdev_io_mix_split_test ...passed 00:05:44.658 Test: bdev_io_split_with_io_wait ...passed 00:05:44.658 Test: bdev_io_write_unit_split_test ...[2024-06-11 05:58:15.086413] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:44.658 [2024-06-11 05:58:15.086547] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:44.658 [2024-06-11 05:58:15.086597] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:05:44.658 [2024-06-11 05:58:15.086645] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:05:44.658 passed 00:05:44.658 Test: bdev_io_alignment_with_boundary ...passed 00:05:44.658 Test: bdev_io_alignment ...passed 00:05:44.658 Test: bdev_histograms ...passed 00:05:44.916 Test: bdev_write_zeroes ...passed 00:05:44.916 Test: bdev_compare_and_write ...passed 00:05:44.916 Test: bdev_compare ...passed 00:05:45.175 Test: bdev_compare_emulated ...passed 00:05:45.175 Test: bdev_zcopy_write ...passed 00:05:45.175 Test: bdev_zcopy_read ...passed 00:05:45.175 Test: bdev_open_while_hotremove ...passed 00:05:45.175 Test: bdev_close_while_hotremove ...passed 00:05:45.175 Test: bdev_open_ext_test ...[2024-06-11 05:58:15.763023] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:45.175 passed 00:05:45.175 Test: bdev_open_ext_unregister ...[2024-06-11 05:58:15.763314] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:45.175 passed 00:05:45.433 Test: bdev_set_io_timeout ...passed 00:05:45.433 Test: bdev_set_qd_sampling ...passed 00:05:45.433 Test: lba_range_overlap ...passed 00:05:45.433 Test: lock_lba_range_check_ranges 
...passed 00:05:45.433 Test: lock_lba_range_with_io_outstanding ...passed 00:05:45.433 Test: lock_lba_range_overlapped ...passed 00:05:45.433 Test: bdev_quiesce ...[2024-06-11 05:58:16.076673] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9969:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:05:45.691 passed 00:05:45.691 Test: bdev_io_abort ...passed 00:05:45.691 Test: bdev_unmap ...passed 00:05:45.691 Test: bdev_write_zeroes_split_test ...passed 00:05:45.691 Test: bdev_set_options_test ...[2024-06-11 05:58:16.291117] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:05:45.691 passed 00:05:45.691 Test: bdev_get_memory_domains ...passed 00:05:45.949 Test: bdev_io_ext ...passed 00:05:45.949 Test: bdev_io_ext_no_opts ...passed 00:05:45.949 Test: bdev_io_ext_invalid_opts ...passed 00:05:45.949 Test: bdev_io_ext_split ...passed 00:05:45.949 Test: bdev_io_ext_bounce_buffer ...passed 00:05:45.949 Test: bdev_register_uuid_alias ...[2024-06-11 05:58:16.586656] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name c2da7e34-ded7-4d85-aea2-c512c8a71be8 already exists 00:05:45.950 [2024-06-11 05:58:16.586748] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:c2da7e34-ded7-4d85-aea2-c512c8a71be8 alias for bdev bdev0 00:05:46.209 passed 00:05:46.209 Test: bdev_unregister_by_name ...[2024-06-11 05:58:16.618671] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7836:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:05:46.209 passed 00:05:46.209 Test: for_each_bdev_test ...[2024-06-11 05:58:16.618779] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7844:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 
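Each unit suite above is launched through the run_test wrapper from autotest_common.sh, which prints the starred START TEST / END TEST banners and the per-test real/user/sys timings seen throughout this log. A simplified sketch of what such a wrapper can look like; the real helper also records test names to .run_test_name and toggles xtrace, so this is an illustration under those assumptions, not the script itself:

    # Simplified run_test: banner, timed execution, banner; preserves exit status.
    run_test() {
        local test_name=$1
        shift
        (( $# >= 1 )) || return 1   # need a command to execute

        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"

        local rc=0
        time "$@" || rc=$?

        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

    # Usage, mirroring the invocation traced above:
    # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut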
00:05:46.209 passed 00:05:46.209 Test: bdev_seek_test ...passed 00:05:46.209 Test: bdev_copy ...passed 00:05:46.209 Test: bdev_copy_split_test ...passed 00:05:46.209 Test: examine_locks ...passed 00:05:46.209 Test: claim_v2_rwo ...[2024-06-11 05:58:16.786606] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:46.209 [2024-06-11 05:58:16.786716] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8570:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:46.209 [2024-06-11 05:58:16.786747] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:46.209 [2024-06-11 05:58:16.786868] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:46.209 [2024-06-11 05:58:16.786904] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:46.209 [2024-06-11 05:58:16.786984] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:05:46.209 passed 00:05:46.209 Test: claim_v2_rom ...[2024-06-11 05:58:16.787193] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:46.209 [2024-06-11 05:58:16.787284] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:46.209 [2024-06-11 05:58:16.787321] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:46.209 [2024-06-11 05:58:16.787363] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:46.209 passed 00:05:46.209 Test: claim_v2_rwm ...[2024-06-11 05:58:16.787443] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8608:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:05:46.209 [2024-06-11 05:58:16.787501] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:46.209 [2024-06-11 05:58:16.787673] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:46.209 [2024-06-11 05:58:16.787754] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:46.209 [2024-06-11 05:58:16.787794] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:46.209 [2024-06-11 05:58:16.787836] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:46.209 [2024-06-11 05:58:16.787864] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 
already claimed: type read_many_write_many by module bdev_ut 00:05:46.209 [2024-06-11 05:58:16.787911] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8658:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:05:46.209 [2024-06-11 05:58:16.787968] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:46.209 passed 00:05:46.209 Test: claim_v2_existing_writer ...[2024-06-11 05:58:16.788187] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:46.209 passed 00:05:46.209 Test: claim_v2_existing_v1 ...[2024-06-11 05:58:16.788247] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:46.209 [2024-06-11 05:58:16.788415] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:46.209 [2024-06-11 05:58:16.788471] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:46.209 [2024-06-11 05:58:16.788508] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:46.209 passed 00:05:46.209 Test: claim_v1_existing_v2 ...[2024-06-11 05:58:16.788671] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:46.209 passed 00:05:46.209 Test: examine_claimed ...[2024-06-11 05:58:16.788747] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:46.209 [2024-06-11 05:58:16.788818] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:46.209 [2024-06-11 05:58:16.789180] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:05:46.209 passed 00:05:46.209 00:05:46.209 Run Summary: Type Total Ran Passed Failed Inactive 00:05:46.209 suites 1 1 n/a 0 0 00:05:46.209 tests 59 59 59 0 0 00:05:46.209 asserts 4599 4599 4599 0 n/a 00:05:46.209 00:05:46.209 Elapsed time = 2.359 seconds 00:05:46.209 05:58:16 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:05:46.209 00:05:46.209 00:05:46.209 CUnit - A unit testing framework for C - Version 2.1-3 00:05:46.209 http://cunit.sourceforge.net/ 00:05:46.209 00:05:46.209 00:05:46.209 Suite: nvme 00:05:46.209 Test: test_create_ctrlr ...passed 00:05:46.209 Test: test_reset_ctrlr ...[2024-06-11 05:58:16.851677] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:05:46.209 passed 00:05:46.209 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:05:46.209 Test: test_failover_ctrlr ...passed 00:05:46.209 Test: test_race_between_failover_and_add_secondary_trid ...[2024-06-11 05:58:16.854609] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:46.469 [2024-06-11 05:58:16.854891] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:46.469 [2024-06-11 05:58:16.855168] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:46.469 passed 00:05:46.470 Test: test_pending_reset ...[2024-06-11 05:58:16.856974] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:46.470 [2024-06-11 05:58:16.857303] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:46.470 passed 00:05:46.470 Test: test_attach_ctrlr ...[2024-06-11 05:58:16.858707] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4230:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:05:46.470 passed 00:05:46.470 Test: test_aer_cb ...passed 00:05:46.470 Test: test_submit_nvme_cmd ...passed 00:05:46.470 Test: test_add_remove_trid ...passed 00:05:46.470 Test: test_abort ...[2024-06-11 05:58:16.862748] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7221:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:05:46.470 passed 00:05:46.470 Test: test_get_io_qpair ...passed 00:05:46.470 Test: test_bdev_unregister ...passed 00:05:46.470 Test: test_compare_ns ...passed 00:05:46.470 Test: test_init_ana_log_page ...passed 00:05:46.470 Test: test_get_memory_domains ...passed 00:05:46.470 Test: test_reconnect_qpair ...[2024-06-11 05:58:16.865978] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:46.470 passed 00:05:46.470 Test: test_create_bdev_ctrlr ...[2024-06-11 05:58:16.866535] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5273:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:05:46.470 passed 00:05:46.470 Test: test_add_multi_ns_to_bdev ...[2024-06-11 05:58:16.868044] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4486:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:05:46.470 passed 00:05:46.470 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:05:46.470 Test: test_admin_path ...passed 00:05:46.470 Test: test_reset_bdev_ctrlr ...passed 00:05:46.470 Test: test_find_io_path ...passed 00:05:46.470 Test: test_retry_io_if_ana_state_is_updating ...passed 00:05:46.470 Test: test_retry_io_for_io_path_error ...passed 00:05:46.470 Test: test_retry_io_count ...passed 00:05:46.470 Test: test_concurrent_read_ana_log_page ...passed 00:05:46.470 Test: test_retry_io_for_ana_error ...passed 00:05:46.470 Test: test_check_io_error_resiliency_params ...[2024-06-11 05:58:16.875986] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5926:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 
00:05:46.470 [2024-06-11 05:58:16.876073] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5930:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:46.470 [2024-06-11 05:58:16.876111] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5939:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:46.470 [2024-06-11 05:58:16.876156] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5942:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:05:46.470 [2024-06-11 05:58:16.876209] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5954:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:46.470 [2024-06-11 05:58:16.876263] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5954:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:46.470 [2024-06-11 05:58:16.876298] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5934:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:05:46.470 passed 00:05:46.470 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-06-11 05:58:16.876370] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5949:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:05:46.470 [2024-06-11 05:58:16.876423] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5946:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:05:46.470 passed 00:05:46.470 Test: test_reconnect_ctrlr ...[2024-06-11 05:58:16.877566] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:46.470 [2024-06-11 05:58:16.877731] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:46.470 [2024-06-11 05:58:16.878006] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:46.470 [2024-06-11 05:58:16.878148] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:46.470 [2024-06-11 05:58:16.878281] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:46.470 passed 00:05:46.470 Test: test_retry_failover_ctrlr ...[2024-06-11 05:58:16.878651] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:46.470 passed 00:05:46.470 Test: test_fail_path ...[2024-06-11 05:58:16.879183] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:46.470 [2024-06-11 05:58:16.879355] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
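Editor's note: the bdev_nvme_check_io_error_resiliency_params failures above spell out the complete constraint set for the three reconnect/failover knobs. Below is a minimal compilable sketch of that validation, reconstructed solely from the logged messages; the function name, signature, and parameter types are illustrative assumptions, not the actual SPDK prototype in module/bdev/nvme/bdev_nvme.c.

#include <stdbool.h>
#include <stdint.h>

/* Sketch only: each check mirrors one error message from the log above. */
static bool
check_io_error_resiliency_params_sketch(int32_t ctrlr_loss_timeout_sec,
                                        uint32_t reconnect_delay_sec,
                                        uint32_t fast_io_fail_timeout_sec)
{
    if (ctrlr_loss_timeout_sec < -1) {
        return false; /* "ctrlr_loss_timeout_sec can't be less than -1." */
    }
    if (ctrlr_loss_timeout_sec == 0) {
        /* "Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0
         * if ctrlr_loss_timeout_sec is 0." */
        return reconnect_delay_sec == 0 && fast_io_fail_timeout_sec == 0;
    }
    if (reconnect_delay_sec == 0) {
        return false; /* "reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0." */
    }
    if (ctrlr_loss_timeout_sec > 0 &&
        reconnect_delay_sec > (uint32_t)ctrlr_loss_timeout_sec) {
        return false; /* "reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec." */
    }
    if (fast_io_fail_timeout_sec != 0) {
        if (ctrlr_loss_timeout_sec > 0 &&
            fast_io_fail_timeout_sec > (uint32_t)ctrlr_loss_timeout_sec) {
            return false; /* "fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec." */
        }
        if (reconnect_delay_sec > fast_io_fail_timeout_sec) {
            return false; /* "reconnect_delay_sec can't be more than fast_io_fail_timeout_sec." */
        }
    }
    return true;
}

The upper-bound checks are guarded on ctrlr_loss_timeout_sec > 0, presumably because -1 acts as a retry-forever sentinel that no delay can meaningfully exceed.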
00:05:46.470 [2024-06-11 05:58:16.879522] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:46.470 [2024-06-11 05:58:16.879687] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:46.470 passed 00:05:46.470 Test: test_nvme_ns_cmp ...passed 00:05:46.470 Test: test_ana_transition ...[2024-06-11 05:58:16.879811] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:46.470 passed 00:05:46.470 Test: test_set_preferred_path ...passed 00:05:46.470 Test: test_find_next_io_path ...passed 00:05:46.470 Test: test_find_io_path_min_qd ...passed 00:05:46.470 Test: test_disable_auto_failback ...[2024-06-11 05:58:16.881596] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:46.470 passed 00:05:46.470 Test: test_set_multipath_policy ...passed 00:05:46.470 Test: test_uuid_generation ...passed 00:05:46.470 Test: test_retry_io_to_same_path ...passed 00:05:46.470 Test: test_race_between_reset_and_disconnected ...passed 00:05:46.470 Test: test_ctrlr_op_rpc ...passed 00:05:46.470 Test: test_bdev_ctrlr_op_rpc ...passed 00:05:46.470 Test: test_disable_enable_ctrlr ...[2024-06-11 05:58:16.885610] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:46.470 [2024-06-11 05:58:16.885772] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:46.470 passed 00:05:46.470 Test: test_delete_ctrlr_done ...passed 00:05:46.470 Test: test_ns_remove_during_reset ...passed 00:05:46.470 00:05:46.470 Run Summary: Type Total Ran Passed Failed Inactive 00:05:46.470 suites 1 1 n/a 0 0 00:05:46.470 tests 48 48 48 0 0 00:05:46.470 asserts 3553 3553 3553 0 n/a 00:05:46.470 00:05:46.470 Elapsed time = 0.037 seconds 00:05:46.470 05:58:16 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:05:46.470 Test Options 00:05:46.470 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:05:46.470 00:05:46.470 00:05:46.470 CUnit - A unit testing framework for C - Version 2.1-3 00:05:46.470 http://cunit.sourceforge.net/ 00:05:46.470 00:05:46.470 00:05:46.470 Suite: raid 00:05:46.470 Test: test_create_raid ...passed 00:05:46.470 Test: test_create_raid_superblock ...passed 00:05:46.470 Test: test_delete_raid ...passed 00:05:46.470 Test: test_create_raid_invalid_args ...[2024-06-11 05:58:16.936571] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:05:46.470 [2024-06-11 05:58:16.937183] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:05:46.470 [2024-06-11 05:58:16.937622] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:05:46.470 [2024-06-11 05:58:16.937885] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:46.470 [2024-06-11 05:58:16.938590] 
/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:46.470 passed 00:05:46.470 Test: test_delete_raid_invalid_args ...passed 00:05:46.470 Test: test_io_channel ...passed 00:05:46.470 Test: test_reset_io ...passed 00:05:46.470 Test: test_write_io ...passed 00:05:46.470 Test: test_read_io ...passed 00:05:47.918 Test: test_unmap_io ...passed 00:05:47.919 Test: test_io_failure ...[2024-06-11 05:58:18.134700] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:05:47.919 passed 00:05:47.919 Test: test_multi_raid_no_io ...passed 00:05:47.919 Test: test_multi_raid_with_io ...passed 00:05:47.919 Test: test_io_type_supported ...passed 00:05:47.919 Test: test_raid_json_dump_info ...passed 00:05:47.919 Test: test_context_size ...passed 00:05:47.919 Test: test_raid_level_conversions ...passed 00:05:47.919 Test: test_raid_process ...passed 00:05:47.919 Test: test_raid_io_split ...passed 00:05:47.919 00:05:47.919 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.919 suites 1 1 n/a 0 0 00:05:47.919 tests 19 19 19 0 0 00:05:47.919 asserts 177879 177879 177879 0 n/a 00:05:47.919 00:05:47.919 Elapsed time = 1.212 seconds 00:05:47.919 05:58:18 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:05:47.919 00:05:47.919 00:05:47.919 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.919 http://cunit.sourceforge.net/ 00:05:47.919 00:05:47.919 00:05:47.919 Suite: raid_sb 00:05:47.919 Test: test_raid_bdev_write_superblock ...passed 00:05:47.919 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:05:47.919 Test: test_raid_bdev_parse_superblock ...[2024-06-11 05:58:18.190905] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:47.919 passed 00:05:47.919 00:05:47.919 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.919 suites 1 1 n/a 0 0 00:05:47.919 tests 3 3 3 0 0 00:05:47.919 asserts 32 32 32 0 n/a 00:05:47.919 00:05:47.919 Elapsed time = 0.001 seconds 00:05:47.919 05:58:18 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:05:47.919 00:05:47.919 00:05:47.919 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.919 http://cunit.sourceforge.net/ 00:05:47.919 00:05:47.919 00:05:47.919 Suite: concat 00:05:47.919 Test: test_concat_start ...passed 00:05:47.919 Test: test_concat_rw ...passed 00:05:47.919 Test: test_concat_null_payload ...passed 00:05:47.919 00:05:47.919 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.919 suites 1 1 n/a 0 0 00:05:47.919 tests 3 3 3 0 0 00:05:47.919 asserts 8097 8097 8097 0 n/a 00:05:47.919 00:05:47.919 Elapsed time = 0.005 seconds 00:05:47.919 05:58:18 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:05:47.919 00:05:47.919 00:05:47.919 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.919 http://cunit.sourceforge.net/ 00:05:47.919 00:05:47.919 00:05:47.919 Suite: raid1 00:05:47.919 Test: test_raid1_start ...passed 00:05:47.919 Test: test_raid1_read_balancing ...passed 00:05:47.919 00:05:47.919 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.919 suites 1 1 n/a 0 0 00:05:47.919 tests 2 2 2 0 0 00:05:47.919 asserts 2856 2856 2856 0 
n/a 00:05:47.919 00:05:47.919 Elapsed time = 0.004 seconds 00:05:47.919 05:58:18 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:05:47.919 00:05:47.919 00:05:47.919 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.919 http://cunit.sourceforge.net/ 00:05:47.919 00:05:47.919 00:05:47.919 Suite: zone 00:05:47.919 Test: test_zone_get_operation ...passed 00:05:47.919 Test: test_bdev_zone_get_info ...passed 00:05:47.919 Test: test_bdev_zone_management ...passed 00:05:47.919 Test: test_bdev_zone_append ...passed 00:05:47.919 Test: test_bdev_zone_append_with_md ...passed 00:05:47.919 Test: test_bdev_zone_appendv ...passed 00:05:47.919 Test: test_bdev_zone_appendv_with_md ...passed 00:05:47.919 Test: test_bdev_io_get_append_location ...passed 00:05:47.919 00:05:47.919 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.919 suites 1 1 n/a 0 0 00:05:47.919 tests 8 8 8 0 0 00:05:47.919 asserts 94 94 94 0 n/a 00:05:47.919 00:05:47.919 Elapsed time = 0.001 seconds 00:05:47.919 05:58:18 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:05:47.919 00:05:47.919 00:05:47.919 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.919 http://cunit.sourceforge.net/ 00:05:47.919 00:05:47.919 00:05:47.919 Suite: gpt_parse 00:05:47.919 Test: test_parse_mbr_and_primary ...[2024-06-11 05:58:18.371683] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:47.919 [2024-06-11 05:58:18.372561] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:47.919 [2024-06-11 05:58:18.372635] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:47.919 [2024-06-11 05:58:18.372897] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:47.919 [2024-06-11 05:58:18.373218] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:47.919 [2024-06-11 05:58:18.373507] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:47.919 passed 00:05:47.919 Test: test_parse_secondary ...[2024-06-11 05:58:18.374429] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:47.919 [2024-06-11 05:58:18.374516] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:47.919 [2024-06-11 05:58:18.374989] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:47.919 [2024-06-11 05:58:18.375065] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:47.919 passed 00:05:47.919 Test: test_check_mbr ...[2024-06-11 05:58:18.376120] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:47.919 passed 00:05:47.919 Test: test_read_header ...[2024-06-11 05:58:18.376201] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:47.919 [2024-06-11 05:58:18.376575] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:05:47.919 [2024-06-11 05:58:18.377052] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:05:47.919 [2024-06-11 05:58:18.377193] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:05:47.919 [2024-06-11 05:58:18.377572] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:05:47.919 [2024-06-11 05:58:18.377636] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:05:47.919 [2024-06-11 05:58:18.377690] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:05:47.919 passed 00:05:47.919 Test: test_read_partitions ...[2024-06-11 05:58:18.378203] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:05:47.919 [2024-06-11 05:58:18.378289] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:05:47.919 [2024-06-11 05:58:18.378649] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:05:47.919 [2024-06-11 05:58:18.378715] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:05:47.919 [2024-06-11 05:58:18.379400] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:05:47.919 passed 00:05:47.919 00:05:47.919 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.919 suites 1 1 n/a 0 0 00:05:47.919 tests 5 5 5 0 0 00:05:47.919 asserts 33 33 33 0 n/a 00:05:47.919 00:05:47.919 Elapsed time = 0.009 seconds 00:05:47.919 05:58:18 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:05:47.919 00:05:47.919 00:05:47.919 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.919 http://cunit.sourceforge.net/ 00:05:47.919 00:05:47.919 00:05:47.919 Suite: bdev_part 00:05:47.919 Test: part_test ...[2024-06-11 05:58:18.429264] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:05:47.919 passed 00:05:47.919 Test: part_free_test ...passed 00:05:47.919 Test: part_get_io_channel_test ...passed 00:05:47.919 Test: part_construct_ext ...passed 00:05:47.919 00:05:47.919 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.919 suites 1 1 n/a 0 0 00:05:47.919 tests 4 4 4 0 0 00:05:47.919 asserts 48 48 48 0 n/a 00:05:47.919 00:05:47.919 Elapsed time = 0.053 seconds 00:05:47.919 05:58:18 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:05:47.919 00:05:47.919 00:05:47.919 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.919 http://cunit.sourceforge.net/ 00:05:47.919 00:05:47.919 00:05:47.919 Suite: scsi_nvme_suite 00:05:47.919 Test: scsi_nvme_translate_test ...passed 00:05:47.919 00:05:47.919 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.920 suites 1 1 n/a 0 0 00:05:47.920 tests 1 1 1 0 0 00:05:47.920 asserts 104 104 104 0 n/a 00:05:47.920 00:05:47.920 Elapsed time = 0.000 seconds 
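Editor's note: stepping back to the gpt_parse suite above, the failures enumerate the header checks in a fixed order (size, crc32, signature, my_lba, usable-LBA range) followed by the partition checks (entry count, entry size, buffer size, entry-array crc32). A sketch of those rules as compilable C, derived only from the logged messages; the struct layout, field names, and the 92/512 size bounds are assumptions for illustration, with module/bdev/gpt/gpt.c as the authoritative source.

#include <stdint.h>
#include <string.h>

/* Sketch only: all names and constants below are illustrative. */
struct gpt_header_sketch {
    char     signature[8];            /* expected "EFI PART" */
    uint32_t header_size;
    uint32_t header_crc32;
    uint64_t my_lba;
    uint64_t usable_lba_end;
    uint32_t num_partition_entries;   /* checked by the partitions pass */
    uint32_t partition_entry_size;
};

static int
gpt_read_header_sketch(const struct gpt_header_sketch *head,
                       uint64_t expected_my_lba, uint64_t lba_end,
                       uint32_t calculated_crc32)
{
    /* "head_size=600" / "head_size=1633771873": assumed sane bounds */
    if (head->header_size < 92 || head->header_size > 512) {
        return -1;
    }
    /* "head crc32 does not match, provided=..., calculated=..." */
    if (head->header_crc32 != calculated_crc32) {
        return -1;
    }
    /* "signature did not match" */
    if (memcmp(head->signature, "EFI PART", 8) != 0) {
        return -1;
    }
    /* "head my_lba(...) != expected(1)" */
    if (head->my_lba != expected_my_lba) {
        return -1;
    }
    /* "Head's usable_lba_end(...) > lba_end(0)" -> "lba range check error" */
    if (head->usable_lba_end > lba_end) {
        return -1;
    }
    return 0;
}

static int
gpt_read_partitions_sketch(const struct gpt_header_sketch *head, size_t buf_size)
{
    /* "Num_partition_entries=256 which exceeds max=128" */
    if (head->num_partition_entries > 128) {
        return -1;
    }
    /* "Partition_entry_size(0) != expected(80)" */
    if (head->partition_entry_size != 80) {
        return -1;
    }
    /* "Buffer size is not enough" */
    if ((uint64_t)head->num_partition_entries * head->partition_entry_size > buf_size) {
        return -1;
    }
    /* final check in the log: partition entry array crc32 (omitted here) */
    return 0;
}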
00:05:47.920 05:58:18 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:05:48.178 00:05:48.178 00:05:48.178 CUnit - A unit testing framework for C - Version 2.1-3 00:05:48.178 http://cunit.sourceforge.net/ 00:05:48.178 00:05:48.178 00:05:48.178 Suite: lvol 00:05:48.178 Test: ut_lvs_init ...[2024-06-11 05:58:18.572263] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:05:48.178 [2024-06-11 05:58:18.572718] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:05:48.178 passed 00:05:48.178 Test: ut_lvol_init ...passed 00:05:48.178 Test: ut_lvol_snapshot ...passed 00:05:48.178 Test: ut_lvol_clone ...passed 00:05:48.178 Test: ut_lvs_destroy ...passed 00:05:48.178 Test: ut_lvs_unload ...passed 00:05:48.178 Test: ut_lvol_resize ...[2024-06-11 05:58:18.574491] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:05:48.178 passed 00:05:48.178 Test: ut_lvol_set_read_only ...passed 00:05:48.179 Test: ut_lvol_hotremove ...passed 00:05:48.179 Test: ut_vbdev_lvol_get_io_channel ...passed 00:05:48.179 Test: ut_vbdev_lvol_io_type_supported ...passed 00:05:48.179 Test: ut_lvol_read_write ...passed 00:05:48.179 Test: ut_vbdev_lvol_submit_request ...passed 00:05:48.179 Test: ut_lvol_examine_config ...passed 00:05:48.179 Test: ut_lvol_examine_disk ...[2024-06-11 05:58:18.575349] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:05:48.179 passed 00:05:48.179 Test: ut_lvol_rename ...[2024-06-11 05:58:18.576439] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:05:48.179 [2024-06-11 05:58:18.576614] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:05:48.179 passed 00:05:48.179 Test: ut_bdev_finish ...passed 00:05:48.179 Test: ut_lvs_rename ...passed 00:05:48.179 Test: ut_lvol_seek ...passed 00:05:48.179 Test: ut_esnap_dev_create ...[2024-06-11 05:58:18.577436] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:05:48.179 [2024-06-11 05:58:18.577542] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:05:48.179 [2024-06-11 05:58:18.577593] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:05:48.179 passed 00:05:48.179 Test: ut_lvol_esnap_clone_bad_args ...[2024-06-11 05:58:18.577669] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:05:48.179 [2024-06-11 05:58:18.577875] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:05:48.179 [2024-06-11 05:58:18.577929] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:05:48.179 passed 00:05:48.179 00:05:48.179 Run Summary: Type Total Ran Passed Failed 
Inactive 00:05:48.179 suites 1 1 n/a 0 0 00:05:48.179 tests 21 21 21 0 0 00:05:48.179 asserts 712 712 712 0 n/a 00:05:48.179 00:05:48.179 Elapsed time = 0.006 seconds 00:05:48.179 05:58:18 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:05:48.179 00:05:48.179 00:05:48.179 CUnit - A unit testing framework for C - Version 2.1-3 00:05:48.179 http://cunit.sourceforge.net/ 00:05:48.179 00:05:48.179 00:05:48.179 Suite: zone_block 00:05:48.179 Test: test_zone_block_create ...passed 00:05:48.179 Test: test_zone_block_create_invalid ...[2024-06-11 05:58:18.648248] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:05:48.179 [2024-06-11 05:58:18.648675] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-06-11 05:58:18.648935] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:05:48.179 [2024-06-11 05:58:18.649030] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-06-11 05:58:18.649270] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:05:48.179 [2024-06-11 05:58:18.649330] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-06-11 05:58:18.649442] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:05:48.179 [2024-06-11 05:58:18.649511] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:05:48.179 Test: test_get_zone_info ...[2024-06-11 05:58:18.650220] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:48.179 [2024-06-11 05:58:18.650304] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:48.179 [2024-06-11 05:58:18.650395] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:48.179 passed 00:05:48.179 Test: test_supported_io_types ...passed 00:05:48.179 Test: test_reset_zone ...[2024-06-11 05:58:18.651550] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:48.179 [2024-06-11 05:58:18.651632] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:48.179 passed 00:05:48.179 Test: test_open_zone ...[2024-06-11 05:58:18.652251] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:48.179 [2024-06-11 05:58:18.653096] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:05:48.179 [2024-06-11 05:58:18.653187] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:48.179 passed 00:05:48.179 Test: test_zone_write ...[2024-06-11 05:58:18.653829] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:48.179 [2024-06-11 05:58:18.653919] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:48.179 [2024-06-11 05:58:18.653999] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:48.179 [2024-06-11 05:58:18.654080] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:48.179 [2024-06-11 05:58:18.662128] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:05:48.179 [2024-06-11 05:58:18.662194] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:48.179 [2024-06-11 05:58:18.662299] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:05:48.179 [2024-06-11 05:58:18.662343] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:48.179 [2024-06-11 05:58:18.670127] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:48.179 [2024-06-11 05:58:18.670218] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:48.179 passed 00:05:48.179 Test: test_zone_read ...[2024-06-11 05:58:18.670835] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:05:48.179 [2024-06-11 05:58:18.670888] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:48.179 [2024-06-11 05:58:18.671009] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:05:48.179 [2024-06-11 05:58:18.671045] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:48.179 [2024-06-11 05:58:18.671628] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:05:48.179 [2024-06-11 05:58:18.671680] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:48.179 passed 00:05:48.179 Test: test_close_zone ...[2024-06-11 05:58:18.672181] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
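Editor's note: test_zone_write and test_zone_read above probe four distinct rejection paths: an out-of-range zone, a zone in a non-writable state, a write that does not start at the zone's write pointer, and an I/O crossing zone capacity. A sketch of the write-side checks, reconstructed from those messages; the struct, names, and the state handling are assumptions, and the real logic lives in module/bdev/zone_block/vbdev_zone_block.c.

#include <stdbool.h>
#include <stdint.h>

/* Sketch only: illustrative zone bookkeeping, not SPDK's actual structs. */
struct zone_sketch {
    uint64_t start_lba;      /* zone id, i.e. first lba of the zone */
    uint64_t capacity;       /* writable blocks in this zone */
    uint64_t write_pointer;  /* next lba a write must target */
    bool     writable;       /* stands in for the real per-state check */
};

static bool
zone_block_write_ok(const struct zone_sketch *zone, uint64_t lba, uint64_t len,
                    uint64_t total_lbas)
{
    if (zone == NULL || lba >= total_lbas) {
        return false; /* "Trying to write to invalid zone (lba 0x5000)" */
    }
    if (!zone->writable) {
        return false; /* "Trying to write to zone in invalid state 2" */
    }
    if (lba != zone->write_pointer) {
        /* "invalid address (lba 0x407, wp 0x405)": sequential zones only
         * accept writes starting exactly at the current write pointer */
        return false;
    }
    if (lba + len > zone->start_lba + zone->capacity) {
        return false; /* "Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0)" */
    }
    return true;
}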
00:05:48.179 [2024-06-11 05:58:18.672300] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:48.179 [2024-06-11 05:58:18.672582] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:48.179 [2024-06-11 05:58:18.672653] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:48.179 passed 00:05:48.179 Test: test_finish_zone ...[2024-06-11 05:58:18.673468] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:48.179 [2024-06-11 05:58:18.673548] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:48.179 passed 00:05:48.179 Test: test_append_zone ...[2024-06-11 05:58:18.673988] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:48.179 [2024-06-11 05:58:18.674045] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:48.179 [2024-06-11 05:58:18.674100] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:48.179 [2024-06-11 05:58:18.674126] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:48.179 [2024-06-11 05:58:18.689734] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:48.179 [2024-06-11 05:58:18.689806] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:05:48.179 passed 00:05:48.179 00:05:48.179 Run Summary: Type Total Ran Passed Failed Inactive 00:05:48.179 suites 1 1 n/a 0 0 00:05:48.179 tests 11 11 11 0 0 00:05:48.179 asserts 3437 3437 3437 0 n/a 00:05:48.179 00:05:48.179 Elapsed time = 0.043 seconds 00:05:48.179 05:58:18 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:05:48.179 00:05:48.179 00:05:48.179 CUnit - A unit testing framework for C - Version 2.1-3 00:05:48.179 http://cunit.sourceforge.net/ 00:05:48.179 00:05:48.179 00:05:48.179 Suite: bdev 00:05:48.179 Test: basic ...[2024-06-11 05:58:18.800051] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x56505c0e4401): Operation not permitted (rc=-1) 00:05:48.179 [2024-06-11 05:58:18.800361] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x56505c0e43c0): Operation not permitted (rc=-1) 00:05:48.180 [2024-06-11 05:58:18.800401] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x56505c0e4401): Operation not permitted (rc=-1) 00:05:48.439 passed 00:05:48.439 Test: unregister_and_close ...passed 00:05:48.439 Test: unregister_and_close_different_threads ...passed 00:05:48.439 Test: basic_qos ...passed 00:05:48.439 Test: put_channel_during_reset ...passed 00:05:48.698 Test: aborted_reset ...passed 00:05:48.698 Test: aborted_reset_no_outstanding_io ...passed 00:05:48.698 Test: io_during_reset ...passed 00:05:48.698 Test: reset_completions ...passed 00:05:48.698 Test: io_during_qos_queue ...passed 00:05:48.958 Test: io_during_qos_reset ...passed 00:05:48.958 Test: enomem ...passed 00:05:48.958 Test: enomem_multi_bdev ...passed 00:05:48.958 Test: enomem_multi_bdev_unregister ...passed 00:05:49.217 Test: enomem_multi_io_target ...passed 00:05:49.217 Test: qos_dynamic_enable ...passed 00:05:49.217 Test: bdev_histograms_mt ...passed 00:05:49.217 Test: bdev_set_io_timeout_mt ...[2024-06-11 05:58:19.801335] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:05:49.217 passed 00:05:49.217 Test: lock_lba_range_then_submit_io ...[2024-06-11 05:58:19.825855] thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x56505c0e4380 already registered (old:0x6130000003c0 new:0x613000000c80) 00:05:49.217 passed 00:05:49.476 Test: unregister_during_reset ...passed 00:05:49.476 Test: event_notify_and_close ...passed 00:05:49.476 Test: unregister_and_qos_poller ...passed 00:05:49.476 Suite: bdev_wrong_thread 00:05:49.476 Test: spdk_bdev_register_wt ...[2024-06-11 05:58:20.028972] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8364:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x618000001480 (0x618000001480) 00:05:49.476 passed 00:05:49.476 Test: spdk_bdev_examine_wt ...[2024-06-11 05:58:20.029352] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000001480 (0x618000001480) 00:05:49.476 passed 00:05:49.476 00:05:49.476 Run Summary: Type Total Ran Passed Failed Inactive 00:05:49.476 suites 2 2 n/a 0 0 00:05:49.476 tests 24 24 24 0 0 00:05:49.476 asserts 621 621 621 0 n/a 00:05:49.476 00:05:49.476 Elapsed time = 1.259 seconds 00:05:49.476 00:05:49.476 real 0m5.661s 00:05:49.476 user 0m2.357s 00:05:49.476 sys 0m3.307s 00:05:49.476 05:58:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.476 ************************************ 00:05:49.476 END TEST unittest_bdev 
00:05:49.476 ************************************ 00:05:49.476 05:58:20 -- common/autotest_common.sh@10 -- # set +x 00:05:49.476 05:58:20 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:49.476 05:58:20 -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:49.736 05:58:20 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:49.736 05:58:20 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:49.736 05:58:20 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:05:49.736 05:58:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:49.736 05:58:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.736 05:58:20 -- common/autotest_common.sh@10 -- # set +x 00:05:49.736 ************************************ 00:05:49.736 START TEST unittest_blob_blobfs 00:05:49.736 ************************************ 00:05:49.736 05:58:20 -- common/autotest_common.sh@1104 -- # unittest_blob 00:05:49.736 05:58:20 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:05:49.736 05:58:20 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:05:49.736 00:05:49.736 00:05:49.736 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.736 http://cunit.sourceforge.net/ 00:05:49.736 00:05:49.736 00:05:49.736 Suite: blob_nocopy_noextent 00:05:49.736 Test: blob_init ...[2024-06-11 05:58:20.181422] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:05:49.736 passed 00:05:49.736 Test: blob_thin_provision ...passed 00:05:49.736 Test: blob_read_only ...passed 00:05:49.736 Test: bs_load ...[2024-06-11 05:58:20.306164] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:05:49.736 passed 00:05:49.736 Test: bs_load_custom_cluster_size ...passed 00:05:49.736 Test: bs_load_after_failed_grow ...passed 00:05:49.736 Test: bs_cluster_sz ...[2024-06-11 05:58:20.351850] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:05:49.736 [2024-06-11 05:58:20.352319] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:05:49.736 [2024-06-11 05:58:20.352574] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:05:49.736 passed 00:05:49.996 Test: bs_resize_md ...passed 00:05:49.996 Test: bs_destroy ...passed 00:05:49.996 Test: bs_type ...passed 00:05:49.996 Test: bs_super_block ...passed 00:05:49.996 Test: bs_test_recover_cluster_count ...passed 00:05:49.996 Test: bs_grow_live ...passed 00:05:49.996 Test: bs_grow_live_no_space ...passed 00:05:49.996 Test: bs_test_grow ...passed 00:05:49.996 Test: blob_serialize_test ...passed 00:05:49.996 Test: super_block_crc ...passed 00:05:49.996 Test: blob_thin_prov_write_count_io ...passed 00:05:49.996 Test: bs_load_iter_test ...passed 00:05:49.996 Test: blob_relations ...[2024-06-11 05:58:20.612057] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:49.996 [2024-06-11 05:58:20.612194] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:49.996 [2024-06-11 05:58:20.613317] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:49.996 [2024-06-11 05:58:20.613410] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:49.996 passed 00:05:49.996 Test: blob_relations2 ...[2024-06-11 05:58:20.636586] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:49.996 [2024-06-11 05:58:20.636675] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:49.996 [2024-06-11 05:58:20.636724] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:49.996 [2024-06-11 05:58:20.636744] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:49.996 [2024-06-11 05:58:20.638304] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:49.996 [2024-06-11 05:58:20.638368] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:49.996 [2024-06-11 05:58:20.638904] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:49.996 [2024-06-11 05:58:20.638958] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:50.256 passed 00:05:50.256 Test: blob_relations3 ...passed 00:05:50.256 Test: blobstore_clean_power_failure ...passed 00:05:50.515 Test: blob_delete_snapshot_power_failure ...[2024-06-11 05:58:20.908241] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:50.515 [2024-06-11 05:58:20.929075] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:50.515 [2024-06-11 05:58:20.929193] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:50.515 [2024-06-11 05:58:20.929254] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:50.515 [2024-06-11 05:58:20.949389] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:50.515 [2024-06-11 05:58:20.949521] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:50.515 [2024-06-11 05:58:20.949592] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:50.515 [2024-06-11 05:58:20.949648] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:50.515 [2024-06-11 05:58:20.970129] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:05:50.515 [2024-06-11 05:58:20.970330] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:50.515 [2024-06-11 05:58:20.990679] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:05:50.515 [2024-06-11 05:58:20.990835] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:50.515 [2024-06-11 05:58:21.011734] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:05:50.515 [2024-06-11 05:58:21.011870] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:50.515 passed 00:05:50.515 Test: blob_create_snapshot_power_failure ...[2024-06-11 05:58:21.073409] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:50.515 [2024-06-11 05:58:21.113850] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:50.515 [2024-06-11 05:58:21.134594] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:05:50.774 passed 00:05:50.774 Test: blob_io_unit ...passed 00:05:50.774 Test: blob_io_unit_compatibility ...passed 00:05:50.774 Test: blob_ext_md_pages ...passed 00:05:50.774 Test: blob_esnap_io_4096_4096 ...passed 00:05:50.774 Test: blob_esnap_io_512_512 ...passed 00:05:50.774 Test: blob_esnap_io_4096_512 ...passed 00:05:51.033 Test: blob_esnap_io_512_4096 ...passed 00:05:51.033 Suite: blob_bs_nocopy_noextent 00:05:51.033 Test: blob_open ...passed 00:05:51.033 Test: blob_create ...[2024-06-11 05:58:21.521888] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:05:51.033 passed 00:05:51.033 Test: blob_create_loop ...passed 00:05:51.033 Test: blob_create_fail ...[2024-06-11 05:58:21.661233] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:51.300 passed 00:05:51.300 Test: blob_create_internal ...passed 00:05:51.300 Test: blob_create_zero_extent ...passed 00:05:51.300 Test: blob_snapshot ...passed 00:05:51.300 Test: blob_clone ...passed 00:05:51.567 Test: blob_inflate ...[2024-06-11 05:58:21.971208] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:05:51.567 passed 00:05:51.567 Test: blob_delete ...passed 00:05:51.567 Test: blob_resize_test ...[2024-06-11 05:58:22.082936] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:05:51.567 passed 00:05:51.567 Test: channel_ops ...passed 00:05:51.830 Test: blob_super ...passed 00:05:51.830 Test: blob_rw_verify_iov ...passed 00:05:51.830 Test: blob_unmap ...passed 00:05:51.830 Test: blob_iter ...passed 00:05:51.830 Test: blob_parse_md ...passed 00:05:52.089 Test: bs_load_pending_removal ...passed 00:05:52.089 Test: bs_unload ...[2024-06-11 05:58:22.535336] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:05:52.089 passed 00:05:52.089 Test: bs_usable_clusters ...passed 00:05:52.089 Test: blob_crc ...[2024-06-11 05:58:22.652720] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:52.089 [2024-06-11 05:58:22.652911] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:52.089 passed 00:05:52.089 Test: blob_flags ...passed 00:05:52.348 Test: bs_version ...passed 00:05:52.348 Test: blob_set_xattrs_test ...[2024-06-11 05:58:22.829361] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:52.348 [2024-06-11 05:58:22.829495] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:52.348 passed 00:05:52.607 Test: blob_thin_prov_alloc ...passed 00:05:52.607 Test: blob_insert_cluster_msg_test ...passed 00:05:52.607 Test: blob_thin_prov_rw ...passed 00:05:52.607 Test: blob_thin_prov_rle ...passed 00:05:52.607 Test: blob_thin_prov_rw_iov ...passed 00:05:52.866 Test: blob_snapshot_rw ...passed 00:05:52.866 Test: blob_snapshot_rw_iov ...passed 00:05:53.125 Test: blob_inflate_rw ...passed 00:05:53.125 Test: blob_snapshot_freeze_io ...passed 00:05:53.383 Test: blob_operation_split_rw ...passed 00:05:53.642 Test: blob_operation_split_rw_iov ...passed 00:05:53.642 Test: blob_simultaneous_operations ...[2024-06-11 05:58:24.089648] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:53.642 [2024-06-11 05:58:24.089775] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:53.642 [2024-06-11 05:58:24.091282] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:53.642 [2024-06-11 05:58:24.091344] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:53.642 [2024-06-11 05:58:24.107520] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:53.642 [2024-06-11 05:58:24.107589] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:53.642 [2024-06-11 05:58:24.107723] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:05:53.642 [2024-06-11 05:58:24.107757] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:53.642 passed 00:05:53.642 Test: blob_persist_test ...passed 00:05:53.901 Test: blob_decouple_snapshot ...passed 00:05:53.901 Test: blob_seek_io_unit ...passed 00:05:53.901 Test: blob_nested_freezes ...passed 00:05:53.901 Suite: blob_blob_nocopy_noextent 00:05:53.901 Test: blob_write ...passed 00:05:53.901 Test: blob_read ...passed 00:05:54.159 Test: blob_rw_verify ...passed 00:05:54.159 Test: blob_rw_verify_iov_nomem ...passed 00:05:54.159 Test: blob_rw_iov_read_only ...passed 00:05:54.159 Test: blob_xattr ...passed 00:05:54.442 Test: blob_dirty_shutdown ...passed 00:05:54.442 Test: blob_is_degraded ...passed 00:05:54.442 Suite: blob_esnap_bs_nocopy_noextent 00:05:54.442 Test: blob_esnap_create ...passed 00:05:54.442 Test: blob_esnap_thread_add_remove ...passed 00:05:54.442 Test: blob_esnap_clone_snapshot ...passed 00:05:54.701 Test: blob_esnap_clone_inflate ...passed 00:05:54.701 Test: blob_esnap_clone_decouple ...passed 00:05:54.701 Test: blob_esnap_clone_reload ...passed 00:05:54.701 Test: blob_esnap_hotplug ...passed 00:05:54.701 Suite: blob_nocopy_extent 00:05:54.701 Test: blob_init ...[2024-06-11 05:58:25.312246] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:05:54.701 passed 00:05:54.959 Test: blob_thin_provision ...passed 00:05:54.959 Test: blob_read_only ...passed 00:05:54.959 Test: bs_load ...[2024-06-11 05:58:25.391373] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:05:54.959 passed 00:05:54.959 Test: bs_load_custom_cluster_size ...passed 00:05:54.959 Test: bs_load_after_failed_grow ...passed 00:05:54.959 Test: bs_cluster_sz ...[2024-06-11 05:58:25.433889] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:05:54.959 [2024-06-11 05:58:25.434219] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
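The bs_cluster_sz test at this point drives spdk_bs_init with deliberately invalid options: zeroed fields, metadata reservations larger than the device, and a cluster smaller than a page. A minimal sketch of the call under test, assuming a recent SPDK blob API; the device handle and callback names are illustrative, not taken from this run:

```c
/* Sketch only: assumes SPDK headers are available and that a struct
 * spdk_bs_dev has already been created elsewhere (e.g. over a bdev). */
#include "spdk/blob.h"

static void
init_complete(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
{
	if (bserrno != 0) {
		/* e.g. the "Cluster size 4095 is smaller than page size 4096"
		 * and "options cannot be set to 0" failures in this log */
		return;
	}
	/* bs is now usable: spdk_bs_create_blob(), spdk_bs_open_blob(), ... */
}

static void
start_blobstore(struct spdk_bs_dev *bs_dev)
{
	struct spdk_bs_opts opts;

	spdk_bs_opts_init(&opts, sizeof(opts));
	opts.cluster_sz = 4 * 1024 * 1024; /* must be non-zero and >= the 4 KiB page size */
	spdk_bs_init(bs_dev, &opts, init_complete, NULL);
}
```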
00:05:54.959 [2024-06-11 05:58:25.434280] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:05:54.959 passed 00:05:54.959 Test: bs_resize_md ...passed 00:05:54.959 Test: bs_destroy ...passed 00:05:54.959 Test: bs_type ...passed 00:05:54.959 Test: bs_super_block ...passed 00:05:54.959 Test: bs_test_recover_cluster_count ...passed 00:05:54.959 Test: bs_grow_live ...passed 00:05:54.959 Test: bs_grow_live_no_space ...passed 00:05:54.959 Test: bs_test_grow ...passed 00:05:54.959 Test: blob_serialize_test ...passed 00:05:55.217 Test: super_block_crc ...passed 00:05:55.217 Test: blob_thin_prov_write_count_io ...passed 00:05:55.217 Test: bs_load_iter_test ...passed 00:05:55.217 Test: blob_relations ...[2024-06-11 05:58:25.684086] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:55.217 [2024-06-11 05:58:25.684231] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:55.217 [2024-06-11 05:58:25.685293] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:55.217 [2024-06-11 05:58:25.685369] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:55.217 passed 00:05:55.217 Test: blob_relations2 ...[2024-06-11 05:58:25.709083] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:55.217 [2024-06-11 05:58:25.709241] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:55.217 [2024-06-11 05:58:25.709278] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:55.217 [2024-06-11 05:58:25.709325] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:55.217 [2024-06-11 05:58:25.710941] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:55.218 [2024-06-11 05:58:25.711012] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:55.218 [2024-06-11 05:58:25.711480] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:55.218 [2024-06-11 05:58:25.711537] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:55.218 passed 00:05:55.218 Test: blob_relations3 ...passed 00:05:55.476 Test: blobstore_clean_power_failure ...passed 00:05:55.476 Test: blob_delete_snapshot_power_failure ...[2024-06-11 05:58:25.987766] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:55.476 [2024-06-11 05:58:26.008874] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:55.476 [2024-06-11 05:58:26.030728] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:55.476 [2024-06-11 05:58:26.030842] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:55.476 [2024-06-11 05:58:26.030881] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:55.476 [2024-06-11 05:58:26.052423] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:55.476 [2024-06-11 05:58:26.052566] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:55.476 [2024-06-11 05:58:26.052607] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:55.476 [2024-06-11 05:58:26.052642] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:55.476 [2024-06-11 05:58:26.073997] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:55.476 [2024-06-11 05:58:26.074108] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:55.476 [2024-06-11 05:58:26.074150] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:55.476 [2024-06-11 05:58:26.074214] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:55.476 [2024-06-11 05:58:26.095601] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:05:55.476 [2024-06-11 05:58:26.095775] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:55.476 [2024-06-11 05:58:26.117055] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:05:55.476 [2024-06-11 05:58:26.117235] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:55.736 [2024-06-11 05:58:26.138408] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:05:55.736 [2024-06-11 05:58:26.138541] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:55.736 passed 00:05:55.736 Test: blob_create_snapshot_power_failure ...[2024-06-11 05:58:26.201319] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:55.736 [2024-06-11 05:58:26.222192] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:55.736 [2024-06-11 05:58:26.262837] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:55.736 [2024-06-11 05:58:26.283939] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:05:55.736 passed 00:05:55.736 Test: blob_io_unit ...passed 00:05:55.995 Test: blob_io_unit_compatibility ...passed 00:05:55.995 Test: blob_ext_md_pages ...passed 00:05:55.995 Test: blob_esnap_io_4096_4096 ...passed 00:05:55.995 Test: blob_esnap_io_512_512 ...passed 00:05:55.995 Test: blob_esnap_io_4096_512 ...passed 00:05:55.995 Test: 
blob_esnap_io_512_4096 ...passed 00:05:55.995 Suite: blob_bs_nocopy_extent 00:05:56.253 Test: blob_open ...passed 00:05:56.253 Test: blob_create ...[2024-06-11 05:58:26.680730] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:05:56.253 passed 00:05:56.253 Test: blob_create_loop ...passed 00:05:56.253 Test: blob_create_fail ...[2024-06-11 05:58:26.831125] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:56.253 passed 00:05:56.511 Test: blob_create_internal ...passed 00:05:56.511 Test: blob_create_zero_extent ...passed 00:05:56.511 Test: blob_snapshot ...passed 00:05:56.511 Test: blob_clone ...passed 00:05:56.511 Test: blob_inflate ...[2024-06-11 05:58:27.151036] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:05:56.769 passed 00:05:56.769 Test: blob_delete ...passed 00:05:56.769 Test: blob_resize_test ...[2024-06-11 05:58:27.266617] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:05:56.769 passed 00:05:56.769 Test: channel_ops ...passed 00:05:56.769 Test: blob_super ...passed 00:05:57.028 Test: blob_rw_verify_iov ...passed 00:05:57.028 Test: blob_unmap ...passed 00:05:57.028 Test: blob_iter ...passed 00:05:57.028 Test: blob_parse_md ...passed 00:05:57.287 Test: bs_load_pending_removal ...passed 00:05:57.287 Test: bs_unload ...[2024-06-11 05:58:27.724000] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:05:57.287 passed 00:05:57.287 Test: bs_usable_clusters ...passed 00:05:57.287 Test: blob_crc ...[2024-06-11 05:58:27.839563] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:57.287 [2024-06-11 05:58:27.839695] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:57.287 passed 00:05:57.287 Test: blob_flags ...passed 00:05:57.544 Test: bs_version ...passed 00:05:57.544 Test: blob_set_xattrs_test ...[2024-06-11 05:58:28.016160] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:57.545 [2024-06-11 05:58:28.016290] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:57.545 passed 00:05:57.545 Test: blob_thin_prov_alloc ...passed 00:05:57.803 Test: blob_insert_cluster_msg_test ...passed 00:05:57.803 Test: blob_thin_prov_rw ...passed 00:05:57.803 Test: blob_thin_prov_rle ...passed 00:05:57.803 Test: blob_thin_prov_rw_iov ...passed 00:05:58.061 Test: blob_snapshot_rw ...passed 00:05:58.061 Test: blob_snapshot_rw_iov ...passed 00:05:58.320 Test: blob_inflate_rw ...passed 00:05:58.320 Test: blob_snapshot_freeze_io ...passed 00:05:58.578 Test: blob_operation_split_rw ...passed 00:05:58.578 Test: blob_operation_split_rw_iov ...passed 00:05:58.835 Test: blob_simultaneous_operations ...[2024-06-11 05:58:29.257465] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:58.835 [2024-06-11 
05:58:29.257552] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:58.835 [2024-06-11 05:58:29.259076] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:58.835 [2024-06-11 05:58:29.259128] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:58.835 [2024-06-11 05:58:29.275022] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:58.835 [2024-06-11 05:58:29.275111] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:58.835 [2024-06-11 05:58:29.275253] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:58.835 [2024-06-11 05:58:29.275282] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:58.835 passed 00:05:58.835 Test: blob_persist_test ...passed 00:05:58.835 Test: blob_decouple_snapshot ...passed 00:05:59.094 Test: blob_seek_io_unit ...passed 00:05:59.094 Test: blob_nested_freezes ...passed 00:05:59.094 Suite: blob_blob_nocopy_extent 00:05:59.094 Test: blob_write ...passed 00:05:59.094 Test: blob_read ...passed 00:05:59.351 Test: blob_rw_verify ...passed 00:05:59.351 Test: blob_rw_verify_iov_nomem ...passed 00:05:59.351 Test: blob_rw_iov_read_only ...passed 00:05:59.351 Test: blob_xattr ...passed 00:05:59.607 Test: blob_dirty_shutdown ...passed 00:05:59.607 Test: blob_is_degraded ...passed 00:05:59.607 Suite: blob_esnap_bs_nocopy_extent 00:05:59.607 Test: blob_esnap_create ...passed 00:05:59.607 Test: blob_esnap_thread_add_remove ...passed 00:05:59.607 Test: blob_esnap_clone_snapshot ...passed 00:05:59.866 Test: blob_esnap_clone_inflate ...passed 00:05:59.866 Test: blob_esnap_clone_decouple ...passed 00:05:59.866 Test: blob_esnap_clone_reload ...passed 00:05:59.866 Test: blob_esnap_hotplug ...passed 00:05:59.866 Suite: blob_copy_noextent 00:05:59.866 Test: blob_init ...[2024-06-11 05:58:30.466452] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:05:59.866 passed 00:05:59.866 Test: blob_thin_provision ...passed 00:06:00.124 Test: blob_read_only ...passed 00:06:00.124 Test: bs_load ...[2024-06-11 05:58:30.544112] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:00.124 passed 00:06:00.124 Test: bs_load_custom_cluster_size ...passed 00:06:00.124 Test: bs_load_after_failed_grow ...passed 00:06:00.124 Test: bs_cluster_sz ...[2024-06-11 05:58:30.584009] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:00.124 [2024-06-11 05:58:30.584229] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:06:00.124 [2024-06-11 05:58:30.584273] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:00.124 passed 00:06:00.124 Test: bs_resize_md ...passed 00:06:00.124 Test: bs_destroy ...passed 00:06:00.124 Test: bs_type ...passed 00:06:00.124 Test: bs_super_block ...passed 00:06:00.124 Test: bs_test_recover_cluster_count ...passed 00:06:00.124 Test: bs_grow_live ...passed 00:06:00.124 Test: bs_grow_live_no_space ...passed 00:06:00.124 Test: bs_test_grow ...passed 00:06:00.124 Test: blob_serialize_test ...passed 00:06:00.124 Test: super_block_crc ...passed 00:06:00.384 Test: blob_thin_prov_write_count_io ...passed 00:06:00.384 Test: bs_load_iter_test ...passed 00:06:00.384 Test: blob_relations ...[2024-06-11 05:58:30.826893] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:00.384 [2024-06-11 05:58:30.827010] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:00.384 [2024-06-11 05:58:30.827582] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:00.384 [2024-06-11 05:58:30.827618] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:00.384 passed 00:06:00.384 Test: blob_relations2 ...[2024-06-11 05:58:30.848513] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:00.384 [2024-06-11 05:58:30.848603] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:00.384 [2024-06-11 05:58:30.848629] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:00.384 [2024-06-11 05:58:30.848643] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:00.384 [2024-06-11 05:58:30.849545] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:00.384 [2024-06-11 05:58:30.849598] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:00.384 [2024-06-11 05:58:30.849869] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:00.384 [2024-06-11 05:58:30.849906] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:00.384 passed 00:06:00.384 Test: blob_relations3 ...passed 00:06:00.643 Test: blobstore_clean_power_failure ...passed 00:06:00.643 Test: blob_delete_snapshot_power_failure ...[2024-06-11 05:58:31.113113] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:00.643 [2024-06-11 05:58:31.133105] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:00.643 [2024-06-11 05:58:31.133228] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:00.643 [2024-06-11 05:58:31.133270] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:00.643 [2024-06-11 05:58:31.153330] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:00.643 [2024-06-11 05:58:31.153425] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:00.643 [2024-06-11 05:58:31.153464] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:00.643 [2024-06-11 05:58:31.153491] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:00.643 [2024-06-11 05:58:31.173188] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:00.643 [2024-06-11 05:58:31.173319] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:00.643 [2024-06-11 05:58:31.193046] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:00.643 [2024-06-11 05:58:31.193173] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:00.643 [2024-06-11 05:58:31.212941] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:00.643 [2024-06-11 05:58:31.213063] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:00.643 passed 00:06:00.643 Test: blob_create_snapshot_power_failure ...[2024-06-11 05:58:31.271945] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:00.902 [2024-06-11 05:58:31.311010] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:00.902 [2024-06-11 05:58:31.331024] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:00.902 passed 00:06:00.902 Test: blob_io_unit ...passed 00:06:00.902 Test: blob_io_unit_compatibility ...passed 00:06:00.902 Test: blob_ext_md_pages ...passed 00:06:00.902 Test: blob_esnap_io_4096_4096 ...passed 00:06:00.902 Test: blob_esnap_io_512_512 ...passed 00:06:01.161 Test: blob_esnap_io_4096_512 ...passed 00:06:01.161 Test: blob_esnap_io_512_4096 ...passed 00:06:01.161 Suite: blob_bs_copy_noextent 00:06:01.161 Test: blob_open ...passed 00:06:01.161 Test: blob_create ...[2024-06-11 05:58:31.720014] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:01.161 passed 00:06:01.420 Test: blob_create_loop ...passed 00:06:01.420 Test: blob_create_fail ...[2024-06-11 05:58:31.858654] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:01.420 passed 00:06:01.420 Test: blob_create_internal ...passed 00:06:01.420 Test: blob_create_zero_extent ...passed 00:06:01.420 Test: blob_snapshot ...passed 00:06:01.686 Test: blob_clone ...passed 00:06:01.686 Test: blob_inflate ...[2024-06-11 05:58:32.148805] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:01.686 passed 00:06:01.686 Test: blob_delete ...passed 00:06:01.686 Test: blob_resize_test ...[2024-06-11 05:58:32.262175] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:01.686 passed 00:06:01.965 Test: channel_ops ...passed 00:06:01.965 Test: blob_super ...passed 00:06:01.965 Test: blob_rw_verify_iov ...passed 00:06:01.965 Test: blob_unmap ...passed 00:06:01.965 Test: blob_iter ...passed 00:06:02.223 Test: blob_parse_md ...passed 00:06:02.223 Test: bs_load_pending_removal ...passed 00:06:02.223 Test: bs_unload ...[2024-06-11 05:58:32.712286] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:02.223 passed 00:06:02.223 Test: bs_usable_clusters ...passed 00:06:02.223 Test: blob_crc ...[2024-06-11 05:58:32.824873] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:02.223 [2024-06-11 05:58:32.825004] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:02.224 passed 00:06:02.483 Test: blob_flags ...passed 00:06:02.483 Test: bs_version ...passed 00:06:02.483 Test: blob_set_xattrs_test ...[2024-06-11 05:58:32.996145] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:02.483 [2024-06-11 05:58:32.996263] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:02.483 passed 00:06:02.742 Test: blob_thin_prov_alloc ...passed 00:06:02.742 Test: blob_insert_cluster_msg_test ...passed 00:06:02.742 Test: blob_thin_prov_rw ...passed 00:06:02.742 Test: blob_thin_prov_rle ...passed 00:06:03.002 Test: blob_thin_prov_rw_iov ...passed 00:06:03.002 Test: blob_snapshot_rw ...passed 00:06:03.002 Test: blob_snapshot_rw_iov ...passed 00:06:03.261 Test: blob_inflate_rw ...passed 00:06:03.261 Test: blob_snapshot_freeze_io ...passed 00:06:03.520 Test: blob_operation_split_rw ...passed 00:06:03.521 Test: blob_operation_split_rw_iov ...passed 00:06:03.780 Test: blob_simultaneous_operations ...[2024-06-11 05:58:34.196517] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:03.780 [2024-06-11 05:58:34.196628] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.780 [2024-06-11 05:58:34.197230] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:03.780 [2024-06-11 05:58:34.197280] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.780 [2024-06-11 05:58:34.200762] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:03.780 [2024-06-11 05:58:34.200825] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.780 [2024-06-11 05:58:34.200951] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:06:03.780 [2024-06-11 05:58:34.200970] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.780 passed 00:06:03.780 Test: blob_persist_test ...passed 00:06:03.780 Test: blob_decouple_snapshot ...passed 00:06:03.780 Test: blob_seek_io_unit ...passed 00:06:04.039 Test: blob_nested_freezes ...passed 00:06:04.039 Suite: blob_blob_copy_noextent 00:06:04.039 Test: blob_write ...passed 00:06:04.039 Test: blob_read ...passed 00:06:04.039 Test: blob_rw_verify ...passed 00:06:04.298 Test: blob_rw_verify_iov_nomem ...passed 00:06:04.298 Test: blob_rw_iov_read_only ...passed 00:06:04.298 Test: blob_xattr ...passed 00:06:04.298 Test: blob_dirty_shutdown ...passed 00:06:04.298 Test: blob_is_degraded ...passed 00:06:04.298 Suite: blob_esnap_bs_copy_noextent 00:06:04.556 Test: blob_esnap_create ...passed 00:06:04.556 Test: blob_esnap_thread_add_remove ...passed 00:06:04.556 Test: blob_esnap_clone_snapshot ...passed 00:06:04.556 Test: blob_esnap_clone_inflate ...passed 00:06:04.815 Test: blob_esnap_clone_decouple ...passed 00:06:04.815 Test: blob_esnap_clone_reload ...passed 00:06:04.815 Test: blob_esnap_hotplug ...passed 00:06:04.815 Suite: blob_copy_extent 00:06:04.815 Test: blob_init ...[2024-06-11 05:58:35.348325] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:04.815 passed 00:06:04.815 Test: blob_thin_provision ...passed 00:06:04.815 Test: blob_read_only ...passed 00:06:04.815 Test: bs_load ...[2024-06-11 05:58:35.426008] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:04.815 passed 00:06:04.815 Test: bs_load_custom_cluster_size ...passed 00:06:05.074 Test: bs_load_after_failed_grow ...passed 00:06:05.074 Test: bs_cluster_sz ...[2024-06-11 05:58:35.465864] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:05.074 [2024-06-11 05:58:35.466078] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
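The bs_is_blob_deletable / bs_delete_blob_finish errors repeated throughout these suites are expected: the tests confirm that a snapshot cannot be deleted while it is still open or while more than one clone references it. A hedged sketch of the call that exercises that check, assuming the public spdk_bs_delete_blob API; the identifiers are placeholders:

```c
#include "spdk/blob.h"

static void
delete_complete(void *cb_arg, int bserrno)
{
	/* bserrno is non-zero when the snapshot is still open or has more
	 * than one clone, matching the *ERROR* lines captured in this log */
}

static void
try_delete_snapshot(struct spdk_blob_store *bs, spdk_blob_id snapshot_id)
{
	/* A snapshot with exactly one clone can be removed (the clone absorbs
	 * it); open snapshots and multi-clone snapshots are rejected. */
	spdk_bs_delete_blob(bs, snapshot_id, delete_complete, NULL);
}
```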
00:06:05.074 [2024-06-11 05:58:35.466115] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:05.074 passed 00:06:05.074 Test: bs_resize_md ...passed 00:06:05.074 Test: bs_destroy ...passed 00:06:05.074 Test: bs_type ...passed 00:06:05.074 Test: bs_super_block ...passed 00:06:05.074 Test: bs_test_recover_cluster_count ...passed 00:06:05.074 Test: bs_grow_live ...passed 00:06:05.074 Test: bs_grow_live_no_space ...passed 00:06:05.074 Test: bs_test_grow ...passed 00:06:05.074 Test: blob_serialize_test ...passed 00:06:05.074 Test: super_block_crc ...passed 00:06:05.074 Test: blob_thin_prov_write_count_io ...passed 00:06:05.074 Test: bs_load_iter_test ...passed 00:06:05.074 Test: blob_relations ...[2024-06-11 05:58:35.710500] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:05.074 [2024-06-11 05:58:35.710621] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:05.074 [2024-06-11 05:58:35.711643] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:05.074 [2024-06-11 05:58:35.711702] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:05.074 passed 00:06:05.333 Test: blob_relations2 ...[2024-06-11 05:58:35.733608] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:05.333 [2024-06-11 05:58:35.733707] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:05.333 [2024-06-11 05:58:35.733759] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:05.333 [2024-06-11 05:58:35.733791] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:05.333 [2024-06-11 05:58:35.735211] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:05.333 [2024-06-11 05:58:35.735277] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:05.333 [2024-06-11 05:58:35.735761] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:05.333 [2024-06-11 05:58:35.735819] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:05.333 passed 00:06:05.333 Test: blob_relations3 ...passed 00:06:05.593 Test: blobstore_clean_power_failure ...passed 00:06:05.593 Test: blob_delete_snapshot_power_failure ...[2024-06-11 05:58:36.006707] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:05.593 [2024-06-11 05:58:36.027583] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:05.593 [2024-06-11 05:58:36.048593] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:05.593 [2024-06-11 05:58:36.048696] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:05.593 [2024-06-11 05:58:36.048729] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:05.593 [2024-06-11 05:58:36.075025] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:05.593 [2024-06-11 05:58:36.075106] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:05.593 [2024-06-11 05:58:36.075130] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:05.593 [2024-06-11 05:58:36.075161] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:05.593 [2024-06-11 05:58:36.095147] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:05.593 [2024-06-11 05:58:36.095231] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:05.593 [2024-06-11 05:58:36.095256] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:05.593 [2024-06-11 05:58:36.095316] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:05.593 [2024-06-11 05:58:36.115473] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:05.593 [2024-06-11 05:58:36.115595] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:05.593 [2024-06-11 05:58:36.135513] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:05.593 [2024-06-11 05:58:36.135618] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:05.593 [2024-06-11 05:58:36.155643] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:05.593 [2024-06-11 05:58:36.155742] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:05.593 passed 00:06:05.593 Test: blob_create_snapshot_power_failure ...[2024-06-11 05:58:36.215224] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:05.593 [2024-06-11 05:58:36.234848] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:05.853 [2024-06-11 05:58:36.274670] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:05.853 [2024-06-11 05:58:36.294787] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:05.853 passed 00:06:05.853 Test: blob_io_unit ...passed 00:06:05.853 Test: blob_io_unit_compatibility ...passed 00:06:05.853 Test: blob_ext_md_pages ...passed 00:06:05.853 Test: blob_esnap_io_4096_4096 ...passed 00:06:06.129 Test: blob_esnap_io_512_512 ...passed 00:06:06.129 Test: blob_esnap_io_4096_512 ...passed 00:06:06.129 Test: 
blob_esnap_io_512_4096 ...passed 00:06:06.129 Suite: blob_bs_copy_extent 00:06:06.129 Test: blob_open ...passed 00:06:06.129 Test: blob_create ...[2024-06-11 05:58:36.683942] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:06.129 passed 00:06:06.387 Test: blob_create_loop ...passed 00:06:06.387 Test: blob_create_fail ...[2024-06-11 05:58:36.824299] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:06.387 passed 00:06:06.387 Test: blob_create_internal ...passed 00:06:06.387 Test: blob_create_zero_extent ...passed 00:06:06.387 Test: blob_snapshot ...passed 00:06:06.645 Test: blob_clone ...passed 00:06:06.645 Test: blob_inflate ...[2024-06-11 05:58:37.113409] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:06.645 passed 00:06:06.645 Test: blob_delete ...passed 00:06:06.645 Test: blob_resize_test ...[2024-06-11 05:58:37.222208] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:06.645 passed 00:06:06.904 Test: channel_ops ...passed 00:06:06.904 Test: blob_super ...passed 00:06:06.904 Test: blob_rw_verify_iov ...passed 00:06:06.904 Test: blob_unmap ...passed 00:06:06.904 Test: blob_iter ...passed 00:06:07.163 Test: blob_parse_md ...passed 00:06:07.163 Test: bs_load_pending_removal ...passed 00:06:07.163 Test: bs_unload ...[2024-06-11 05:58:37.670055] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:07.163 passed 00:06:07.163 Test: bs_usable_clusters ...passed 00:06:07.163 Test: blob_crc ...[2024-06-11 05:58:37.782056] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:07.163 [2024-06-11 05:58:37.782194] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:07.163 passed 00:06:07.422 Test: blob_flags ...passed 00:06:07.422 Test: bs_version ...passed 00:06:07.422 Test: blob_set_xattrs_test ...[2024-06-11 05:58:37.952973] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:07.422 [2024-06-11 05:58:37.953069] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:07.422 passed 00:06:07.681 Test: blob_thin_prov_alloc ...passed 00:06:07.682 Test: blob_insert_cluster_msg_test ...passed 00:06:07.682 Test: blob_thin_prov_rw ...passed 00:06:07.682 Test: blob_thin_prov_rle ...passed 00:06:07.939 Test: blob_thin_prov_rw_iov ...passed 00:06:07.939 Test: blob_snapshot_rw ...passed 00:06:07.939 Test: blob_snapshot_rw_iov ...passed 00:06:08.198 Test: blob_inflate_rw ...passed 00:06:08.198 Test: blob_snapshot_freeze_io ...passed 00:06:08.456 Test: blob_operation_split_rw ...passed 00:06:08.456 Test: blob_operation_split_rw_iov ...passed 00:06:08.716 Test: blob_simultaneous_operations ...[2024-06-11 05:58:39.137644] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:08.716 [2024-06-11 
05:58:39.137753] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:08.716 [2024-06-11 05:58:39.138288] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:08.716 [2024-06-11 05:58:39.138340] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:08.716 [2024-06-11 05:58:39.141533] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:08.716 [2024-06-11 05:58:39.141581] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:08.716 [2024-06-11 05:58:39.141700] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:08.716 [2024-06-11 05:58:39.141723] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:08.716 passed 00:06:08.716 Test: blob_persist_test ...passed 00:06:08.716 Test: blob_decouple_snapshot ...passed 00:06:08.716 Test: blob_seek_io_unit ...passed 00:06:08.974 Test: blob_nested_freezes ...passed 00:06:08.974 Suite: blob_blob_copy_extent 00:06:08.974 Test: blob_write ...passed 00:06:08.974 Test: blob_read ...passed 00:06:08.974 Test: blob_rw_verify ...passed 00:06:09.233 Test: blob_rw_verify_iov_nomem ...passed 00:06:09.233 Test: blob_rw_iov_read_only ...passed 00:06:09.233 Test: blob_xattr ...passed 00:06:09.233 Test: blob_dirty_shutdown ...passed 00:06:09.492 Test: blob_is_degraded ...passed 00:06:09.492 Suite: blob_esnap_bs_copy_extent 00:06:09.492 Test: blob_esnap_create ...passed 00:06:09.492 Test: blob_esnap_thread_add_remove ...passed 00:06:09.492 Test: blob_esnap_clone_snapshot ...passed 00:06:09.492 Test: blob_esnap_clone_inflate ...passed 00:06:09.750 Test: blob_esnap_clone_decouple ...passed 00:06:09.750 Test: blob_esnap_clone_reload ...passed 00:06:09.750 Test: blob_esnap_hotplug ...passed 00:06:09.750 00:06:09.750 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.750 suites 16 16 n/a 0 0 00:06:09.750 tests 348 348 348 0 0 00:06:09.750 asserts 92605 92605 92605 0 n/a 00:06:09.750 00:06:09.750 Elapsed time = 20.097 seconds 00:06:10.008 05:58:40 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:06:10.008 00:06:10.008 00:06:10.008 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.008 http://cunit.sourceforge.net/ 00:06:10.008 00:06:10.008 00:06:10.008 Suite: blob_bdev 00:06:10.008 Test: create_bs_dev ...passed 00:06:10.008 Test: create_bs_dev_ro ...[2024-06-11 05:58:40.433992] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:06:10.008 passed 00:06:10.008 Test: create_bs_dev_rw ...passed 00:06:10.008 Test: claim_bs_dev ...[2024-06-11 05:58:40.434405] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:06:10.008 passed 00:06:10.008 Test: claim_bs_dev_ro ...passed 00:06:10.008 Test: deferred_destroy_refs ...passed 00:06:10.008 Test: deferred_destroy_channels ...passed 00:06:10.008 Test: deferred_destroy_threads ...passed 00:06:10.008 00:06:10.008 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.008 suites 1 1 n/a 0 0 00:06:10.008 tests 8 8 8 0 0 00:06:10.008 
asserts 119 119 119 0 n/a 00:06:10.008 00:06:10.008 Elapsed time = 0.001 seconds 00:06:10.008 05:58:40 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:06:10.008 00:06:10.008 00:06:10.008 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.008 http://cunit.sourceforge.net/ 00:06:10.008 00:06:10.008 00:06:10.008 Suite: tree 00:06:10.008 Test: blobfs_tree_op_test ...passed 00:06:10.008 00:06:10.008 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.008 suites 1 1 n/a 0 0 00:06:10.008 tests 1 1 1 0 0 00:06:10.008 asserts 27 27 27 0 n/a 00:06:10.008 00:06:10.009 Elapsed time = 0.000 seconds 00:06:10.009 05:58:40 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:06:10.009 00:06:10.009 00:06:10.009 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.009 http://cunit.sourceforge.net/ 00:06:10.009 00:06:10.009 00:06:10.009 Suite: blobfs_async_ut 00:06:10.009 Test: fs_init ...passed 00:06:10.009 Test: fs_open ...passed 00:06:10.267 Test: fs_create ...passed 00:06:10.267 Test: fs_truncate ...passed 00:06:10.267 Test: fs_rename ...[2024-06-11 05:58:40.720545] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:06:10.267 passed 00:06:10.267 Test: fs_rw_async ...passed 00:06:10.267 Test: fs_writev_readv_async ...passed 00:06:10.267 Test: tree_find_buffer_ut ...passed 00:06:10.267 Test: channel_ops ...passed 00:06:10.267 Test: channel_ops_sync ...passed 00:06:10.267 00:06:10.267 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.267 suites 1 1 n/a 0 0 00:06:10.267 tests 10 10 10 0 0 00:06:10.267 asserts 292 292 292 0 n/a 00:06:10.267 00:06:10.267 Elapsed time = 0.285 seconds 00:06:10.267 05:58:40 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:06:10.267 00:06:10.267 00:06:10.267 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.267 http://cunit.sourceforge.net/ 00:06:10.267 00:06:10.267 00:06:10.267 Suite: blobfs_sync_ut 00:06:10.526 Test: cache_read_after_write ...[2024-06-11 05:58:41.002994] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:06:10.526 passed 00:06:10.526 Test: file_length ...passed 00:06:10.526 Test: append_write_to_extend_blob ...passed 00:06:10.526 Test: partial_buffer ...passed 00:06:10.526 Test: cache_write_null_buffer ...passed 00:06:10.526 Test: fs_create_sync ...passed 00:06:10.526 Test: fs_rename_sync ...passed 00:06:10.785 Test: cache_append_no_cache ...passed 00:06:10.785 Test: fs_delete_file_without_close ...passed 00:06:10.785 00:06:10.785 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.785 suites 1 1 n/a 0 0 00:06:10.785 tests 9 9 9 0 0 00:06:10.785 asserts 345 345 345 0 n/a 00:06:10.785 00:06:10.785 Elapsed time = 0.566 seconds 00:06:10.785 05:58:41 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:06:10.785 00:06:10.785 00:06:10.786 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.786 http://cunit.sourceforge.net/ 00:06:10.786 00:06:10.786 00:06:10.786 Suite: blobfs_bdev_ut 00:06:10.786 Test: spdk_blobfs_bdev_detect_test ...[2024-06-11 05:58:41.269473] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 
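tree_ut, blobfs_async_ut, blobfs_sync_ut and the other binaries invoked here are stand-alone CUnit 2.1-3 executables. A minimal skeleton of that pattern, mirroring the Suite/Test/Run Summary output above; the suite and test names are illustrative rather than SPDK's actual registration code:

```c
/* Minimal CUnit 2.1-3 skeleton producing output shaped like this log. */
#include <CUnit/Basic.h>

static void
tree_op_test(void)
{
	CU_ASSERT(1 + 1 == 2); /* real tests assert on blobfs tree operations */
}

int
main(void)
{
	CU_pSuite suite;

	if (CU_initialize_registry() != CUE_SUCCESS) {
		return CU_get_error();
	}
	suite = CU_add_suite("tree", NULL, NULL);
	if (suite == NULL ||
	    CU_add_test(suite, "blobfs_tree_op_test", tree_op_test) == NULL) {
		CU_cleanup_registry();
		return CU_get_error();
	}
	CU_basic_set_mode(CU_BRM_VERBOSE);
	CU_basic_run_tests(); /* prints the Run Summary table seen above */
	CU_cleanup_registry();
	return CU_get_number_of_failures();
}
```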
00:06:10.786 passed 00:06:10.786 Test: spdk_blobfs_bdev_create_test ...[2024-06-11 05:58:41.270120] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:06:10.786 passed 00:06:10.786 Test: spdk_blobfs_bdev_mount_test ...passed 00:06:10.786 00:06:10.786 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.786 suites 1 1 n/a 0 0 00:06:10.786 tests 3 3 3 0 0 00:06:10.786 asserts 9 9 9 0 n/a 00:06:10.786 00:06:10.786 Elapsed time = 0.001 seconds 00:06:10.786 00:06:10.786 real 0m21.142s 00:06:10.786 user 0m20.472s 00:06:10.786 sys 0m0.966s 00:06:10.786 05:58:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.786 05:58:41 -- common/autotest_common.sh@10 -- # set +x 00:06:10.786 ************************************ 00:06:10.786 END TEST unittest_blob_blobfs 00:06:10.786 ************************************ 00:06:10.786 05:58:41 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:06:10.786 05:58:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:10.786 05:58:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:10.786 05:58:41 -- common/autotest_common.sh@10 -- # set +x 00:06:10.786 ************************************ 00:06:10.786 START TEST unittest_event 00:06:10.786 ************************************ 00:06:10.786 05:58:41 -- common/autotest_common.sh@1104 -- # unittest_event 00:06:10.786 05:58:41 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:06:10.786 00:06:10.786 00:06:10.786 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.786 http://cunit.sourceforge.net/ 00:06:10.786 00:06:10.786 00:06:10.786 Suite: app_suite 00:06:10.786 Test: test_spdk_app_parse_args ...app_ut [options] 00:06:10.786 options: 00:06:10.786 -c, --config JSON config file (default none) 00:06:10.786 --json JSON config file (default none) 00:06:10.786 --json-ignore-init-errors 00:06:10.786 don't exit on invalid config entry 00:06:10.786 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:10.786 -g, --single-file-segments 00:06:10.786 force creating just one hugetlbfs file 00:06:10.786 -h, --help show this usage 00:06:10.786 -i, --shm-id shared memory ID (optional) 00:06:10.786 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:10.786 --lcores lcore to CPU mapping list. The list is in the format: 00:06:10.786 [<,lcores[@CPUs]>...] 00:06:10.786 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:10.786 Within the group, '-' is used for range separator, 00:06:10.786 ',' is used for single number separator. 00:06:10.786 '( )' can be omitted for single element group, 00:06:10.786 '@' can be omitted if cpus and lcores have the same value 00:06:10.786 -n, --mem-channels channel number of memory channels used for DPDK 00:06:10.786 -p, --main-core main (primary) core for DPDK 00:06:10.786 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:10.786 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:10.786 --disable-cpumask-locks Disable CPU core lock files. 
00:06:10.786 app_ut: invalid option -- 'z' 00:06:10.786 --silence-noticelog disable notice level logging to stderr 00:06:10.786 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:10.786 -u, --no-pci disable PCI access 00:06:10.786 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:10.786 --max-delay maximum reactor delay (in microseconds) 00:06:10.786 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:10.786 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:10.786 -R, --huge-unlink unlink huge files after initialization 00:06:10.786 -v, --version print SPDK version 00:06:10.786 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:10.786 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:10.786 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:10.786 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:10.786 Tracepoints vary in size and can use more than one trace entry. 00:06:10.786 --rpcs-allowed comma-separated list of permitted RPCS 00:06:10.786 --env-context Opaque context for use of the env implementation 00:06:10.786 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:10.786 --no-huge run without using hugepages 00:06:10.786 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:06:10.786 -e, --tpoint-group [:] 00:06:10.786 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:06:10.786 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:10.786 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:06:10.786 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:10.786 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:10.786 app_ut [options] 00:06:10.786 options: 00:06:10.786 -c, --config JSON config file (default none) 00:06:10.786 --json JSON config file (default none) 00:06:10.786 --json-ignore-init-errors 00:06:10.786 don't exit on invalid config entry 00:06:10.786 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:10.786 -g, --single-file-segments 00:06:10.786 force creating just one hugetlbfs file 00:06:10.786 -h, --help show this usage 00:06:10.786 -i, --shm-id shared memory ID (optional) 00:06:10.786 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:10.786 --lcores lcore to CPU mapping list. The list is in the format: 00:06:10.786 [<,lcores[@CPUs]>...] 00:06:10.786 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:10.786 Within the group, '-' is used for range separator, 00:06:10.786 ',' is used for single number separator. 00:06:10.786 '( )' can be omitted for single element group, 00:06:10.786 '@' can be omitted if cpus and lcores have the same value 00:06:10.786 -n, --mem-channels channel number of memory channels used for DPDK 00:06:10.786 -p, --main-core main (primary) core for DPDK 00:06:10.786 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:10.786 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:10.786 --disable-cpumask-locks Disable CPU core lock files. 
00:06:10.786 --silence-noticelog disable notice level logging to stderr 00:06:10.786 app_ut: unrecognized option '--test-long-opt' 00:06:10.786 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:10.786 -u, --no-pci disable PCI access 00:06:10.786 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:10.786 --max-delay maximum reactor delay (in microseconds) 00:06:10.786 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:10.786 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:10.786 -R, --huge-unlink unlink huge files after initialization 00:06:10.786 -v, --version print SPDK version 00:06:10.786 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:10.786 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:10.786 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:10.786 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:10.786 Tracepoints vary in size and can use more than one trace entry. 00:06:10.786 --rpcs-allowed comma-separated list of permitted RPCS 00:06:10.786 --env-context Opaque context for use of the env implementation 00:06:10.786 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:10.786 --no-huge run without using hugepages 00:06:10.786 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:06:10.786 -e, --tpoint-group [:] 00:06:10.786 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:06:10.786 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:10.786 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:06:10.786 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:10.786 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:10.786 [2024-06-11 05:58:41.374946] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1030:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:06:10.786 [2024-06-11 05:58:41.375406] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:06:10.786 app_ut [options] 00:06:10.786 options: 00:06:10.786 -c, --config JSON config file (default none) 00:06:10.786 --json JSON config file (default none) 00:06:10.786 --json-ignore-init-errors 00:06:10.786 don't exit on invalid config entry 00:06:10.786 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:10.786 -g, --single-file-segments 00:06:10.787 force creating just one hugetlbfs file 00:06:10.787 -h, --help show this usage 00:06:10.787 -i, --shm-id shared memory ID (optional) 00:06:10.787 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:10.787 --lcores lcore to CPU mapping list. The list is in the format: 00:06:10.787 [<,lcores[@CPUs]>...] 00:06:10.787 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:10.787 Within the group, '-' is used for range separator, 00:06:10.787 ',' is used for single number separator. 
00:06:10.787 '( )' can be omitted for single element group, 00:06:10.787 '@' can be omitted if cpus and lcores have the same value 00:06:10.787 -n, --mem-channels channel number of memory channels used for DPDK 00:06:10.787 -p, --main-core main (primary) core for DPDK 00:06:10.787 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:10.787 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:10.787 --disable-cpumask-locks Disable CPU core lock files. 00:06:10.787 --silence-noticelog disable notice level logging to stderr 00:06:10.787 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:10.787 -u, --no-pci disable PCI access 00:06:10.787 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:10.787 --max-delay maximum reactor delay (in microseconds) 00:06:10.787 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:10.787 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:10.787 -R, --huge-unlink unlink huge files after initialization 00:06:10.787 -v, --version print SPDK version 00:06:10.787 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:10.787 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:10.787 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:10.787 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:10.787 Tracepoints vary in size and can use more than one trace entry. 00:06:10.787 --rpcs-allowed comma-separated list of permitted RPCS 00:06:10.787 --env-context Opaque context for use of the env implementation 00:06:10.787 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:10.787 --no-huge run without using hugepages 00:06:10.787 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:06:10.787 -e, --tpoint-group [:] 00:06:10.787 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:06:10.787 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:10.787 Groups and masks can be combined (e.g. thread,bdev:0x1). 
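The usage text above doubles as the reference for SPDK's generic command-line flags. A minimal sketch of how the documented --lcores, -e and -L syntax combines on a real app launch; the binary name spdk_tgt and all concrete values are illustrative assumptions, not taken from this run:

    # illustrative only: pin reactors 5-7 to host CPUs 10-12, enable the
    # bdev tracepoint group with mask 0x1, and turn on all log flags
    # (a plain core mask like -m 0xF is the simpler alternative to --lcores)
    ./build/bin/spdk_tgt --lcores "(5-7)@(10-12)" -e bdev:0x1 -L all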
00:06:10.787 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:10.787 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:10.787 passed 00:06:10.787 00:06:10.787 [2024-06-11 05:58:41.375726] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:06:10.787 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.787 suites 1 1 n/a 0 0 00:06:10.787 tests 1 1 1 0 0 00:06:10.787 asserts 8 8 8 0 n/a 00:06:10.787 00:06:10.787 Elapsed time = 0.002 seconds 00:06:10.787 05:58:41 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:06:10.787 00:06:10.787 00:06:10.787 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.787 http://cunit.sourceforge.net/ 00:06:10.787 00:06:10.787 00:06:10.787 Suite: app_suite 00:06:10.787 Test: test_create_reactor ...passed 00:06:10.787 Test: test_init_reactors ...passed 00:06:10.787 Test: test_event_call ...passed 00:06:10.787 Test: test_schedule_thread ...passed 00:06:10.787 Test: test_reschedule_thread ...passed 00:06:10.787 Test: test_bind_thread ...passed 00:06:10.787 Test: test_for_each_reactor ...passed 00:06:11.045 Test: test_reactor_stats ...passed 00:06:11.045 Test: test_scheduler ...passed 00:06:11.045 Test: test_governor ...passed 00:06:11.045 00:06:11.045 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.045 suites 1 1 n/a 0 0 00:06:11.045 tests 10 10 10 0 0 00:06:11.045 asserts 344 344 344 0 n/a 00:06:11.045 00:06:11.046 Elapsed time = 0.022 seconds 00:06:11.046 00:06:11.046 real 0m0.115s 00:06:11.046 user 0m0.065s 00:06:11.046 sys 0m0.051s 00:06:11.046 05:58:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.046 05:58:41 -- common/autotest_common.sh@10 -- # set +x 00:06:11.046 ************************************ 00:06:11.046 END TEST unittest_event 00:06:11.046 ************************************ 00:06:11.046 05:58:41 -- unit/unittest.sh@233 -- # uname -s 00:06:11.046 05:58:41 -- unit/unittest.sh@233 -- # '[' Linux = Linux ']' 00:06:11.046 05:58:41 -- unit/unittest.sh@234 -- # run_test unittest_ftl unittest_ftl 00:06:11.046 05:58:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:11.046 05:58:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:11.046 05:58:41 -- common/autotest_common.sh@10 -- # set +x 00:06:11.046 ************************************ 00:06:11.046 START TEST unittest_ftl 00:06:11.046 ************************************ 00:06:11.046 05:58:41 -- common/autotest_common.sh@1104 -- # unittest_ftl 00:06:11.046 05:58:41 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:06:11.046 00:06:11.046 00:06:11.046 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.046 http://cunit.sourceforge.net/ 00:06:11.046 00:06:11.046 00:06:11.046 Suite: ftl_band_suite 00:06:11.046 Test: test_band_block_offset_from_addr_base ...passed 00:06:11.046 Test: test_band_block_offset_from_addr_offset ...passed 00:06:11.347 Test: test_band_addr_from_block_offset ...passed 00:06:11.347 Test: test_band_set_addr ...passed 00:06:11.347 Test: test_invalidate_addr ...passed 00:06:11.347 Test: test_next_xfer_addr ...passed 00:06:11.347 00:06:11.347 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.347 suites 1 1 n/a 0 0 00:06:11.347 tests 6 6 6 0 0 00:06:11.347 asserts 30356 30356 30356 0 n/a 00:06:11.347 
00:06:11.347 Elapsed time = 0.260 seconds 00:06:11.347 05:58:41 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:06:11.347 00:06:11.347 00:06:11.347 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.347 http://cunit.sourceforge.net/ 00:06:11.347 00:06:11.347 00:06:11.347 Suite: ftl_bitmap 00:06:11.347 Test: test_ftl_bitmap_create ...[2024-06-11 05:58:41.933315] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:06:11.347 passed 00:06:11.347 Test: test_ftl_bitmap_get ...[2024-06-11 05:58:41.933729] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:06:11.347 passed 00:06:11.347 Test: test_ftl_bitmap_set ...passed 00:06:11.347 Test: test_ftl_bitmap_clear ...passed 00:06:11.347 Test: test_ftl_bitmap_find_first_set ...passed 00:06:11.347 Test: test_ftl_bitmap_find_first_clear ...passed 00:06:11.347 Test: test_ftl_bitmap_count_set ...passed 00:06:11.347 00:06:11.347 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.347 suites 1 1 n/a 0 0 00:06:11.347 tests 7 7 7 0 0 00:06:11.347 asserts 137 137 137 0 n/a 00:06:11.347 00:06:11.347 Elapsed time = 0.001 seconds 00:06:11.347 05:58:41 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:06:11.347 00:06:11.347 00:06:11.347 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.347 http://cunit.sourceforge.net/ 00:06:11.347 00:06:11.347 00:06:11.347 Suite: ftl_io_suite 00:06:11.347 Test: test_completion ...passed 00:06:11.347 Test: test_multiple_ios ...passed 00:06:11.347 00:06:11.347 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.347 suites 1 1 n/a 0 0 00:06:11.347 tests 2 2 2 0 0 00:06:11.347 asserts 47 47 47 0 n/a 00:06:11.347 00:06:11.347 Elapsed time = 0.003 seconds 00:06:11.607 05:58:41 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:06:11.607 00:06:11.607 00:06:11.608 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.608 http://cunit.sourceforge.net/ 00:06:11.608 00:06:11.608 00:06:11.608 Suite: ftl_mngt 00:06:11.608 Test: test_next_step ...passed 00:06:11.608 Test: test_continue_step ...passed 00:06:11.608 Test: test_get_func_and_step_cntx_alloc ...passed 00:06:11.608 Test: test_fail_step ...passed 00:06:11.608 Test: test_mngt_call_and_call_rollback ...passed 00:06:11.608 Test: test_nested_process_failure ...passed 00:06:11.608 00:06:11.608 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.608 suites 1 1 n/a 0 0 00:06:11.608 tests 6 6 6 0 0 00:06:11.608 asserts 176 176 176 0 n/a 00:06:11.608 00:06:11.608 Elapsed time = 0.002 seconds 00:06:11.608 05:58:42 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:06:11.608 00:06:11.608 00:06:11.608 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.608 http://cunit.sourceforge.net/ 00:06:11.608 00:06:11.608 00:06:11.608 Suite: ftl_mempool 00:06:11.608 Test: test_ftl_mempool_create ...passed 00:06:11.608 Test: test_ftl_mempool_get_put ...passed 00:06:11.608 00:06:11.608 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.608 suites 1 1 n/a 0 0 00:06:11.608 tests 2 2 2 0 0 00:06:11.608 asserts 36 36 36 0 n/a 00:06:11.608 00:06:11.608 Elapsed time = 0.000 seconds 00:06:11.608 05:58:42 -- unit/unittest.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:06:11.608 00:06:11.608 00:06:11.608 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.608 http://cunit.sourceforge.net/ 00:06:11.608 00:06:11.608 00:06:11.608 Suite: ftl_addr64_suite 00:06:11.608 Test: test_addr_cached ...passed 00:06:11.608 00:06:11.608 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.608 suites 1 1 n/a 0 0 00:06:11.608 tests 1 1 1 0 0 00:06:11.608 asserts 1536 1536 1536 0 n/a 00:06:11.608 00:06:11.608 Elapsed time = 0.001 seconds 00:06:11.608 05:58:42 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:06:11.608 00:06:11.608 00:06:11.608 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.608 http://cunit.sourceforge.net/ 00:06:11.608 00:06:11.608 00:06:11.608 Suite: ftl_sb 00:06:11.608 Test: test_sb_crc_v2 ...passed 00:06:11.608 Test: test_sb_crc_v3 ...passed 00:06:11.608 Test: test_sb_v3_md_layout ...[2024-06-11 05:58:42.136179] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:06:11.608 [2024-06-11 05:58:42.136631] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:06:11.608 [2024-06-11 05:58:42.136703] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:06:11.608 [2024-06-11 05:58:42.136762] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:06:11.608 [2024-06-11 05:58:42.136831] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:06:11.608 [2024-06-11 05:58:42.136989] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:06:11.608 [2024-06-11 05:58:42.137077] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:06:11.608 [2024-06-11 05:58:42.137154] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:06:11.608 [2024-06-11 05:58:42.137266] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:06:11.608 passed 00:06:11.608 Test: test_sb_v5_md_layout ...[2024-06-11 05:58:42.137319] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:06:11.608 [2024-06-11 05:58:42.137368] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:06:11.608 passed 00:06:11.608 00:06:11.608 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.608 suites 1 1 n/a 0 0 00:06:11.608 tests 4 4 4 0 0 00:06:11.608 asserts 148 148 148 0 n/a 00:06:11.608 00:06:11.608 Elapsed time = 0.003 seconds 00:06:11.608 05:58:42 -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:06:11.608 00:06:11.608 00:06:11.608 CUnit - A unit testing framework 
for C - Version 2.1-3 00:06:11.608 http://cunit.sourceforge.net/ 00:06:11.608 00:06:11.608 00:06:11.608 Suite: ftl_layout_upgrade 00:06:11.608 Test: test_l2p_upgrade ...passed 00:06:11.608 00:06:11.608 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.608 suites 1 1 n/a 0 0 00:06:11.608 tests 1 1 1 0 0 00:06:11.608 asserts 140 140 140 0 n/a 00:06:11.608 00:06:11.608 Elapsed time = 0.001 seconds 00:06:11.608 00:06:11.608 real 0m0.666s 00:06:11.608 user 0m0.293s 00:06:11.608 sys 0m0.376s 00:06:11.608 05:58:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.608 05:58:42 -- common/autotest_common.sh@10 -- # set +x 00:06:11.608 ************************************ 00:06:11.608 END TEST unittest_ftl 00:06:11.608 ************************************ 00:06:11.922 05:58:42 -- unit/unittest.sh@237 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:06:11.922 05:58:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:11.922 05:58:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:11.922 05:58:42 -- common/autotest_common.sh@10 -- # set +x 00:06:11.922 ************************************ 00:06:11.922 START TEST unittest_accel 00:06:11.922 ************************************ 00:06:11.922 05:58:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:06:11.922 00:06:11.922 00:06:11.922 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.922 http://cunit.sourceforge.net/ 00:06:11.922 00:06:11.922 00:06:11.922 Suite: accel_sequence 00:06:11.922 Test: test_sequence_fill_copy ...passed 00:06:11.922 Test: test_sequence_abort ...passed 00:06:11.922 Test: test_sequence_append_error ...passed 00:06:11.922 Test: test_sequence_completion_error ...[2024-06-11 05:58:42.298317] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7feaaf69c7c0 00:06:11.922 [2024-06-11 05:58:42.298691] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7feaaf69c7c0 00:06:11.922 [2024-06-11 05:58:42.298740] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7feaaf69c7c0 00:06:11.922 passed 00:06:11.922 Test: test_sequence_decompress ...[2024-06-11 05:58:42.298806] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7feaaf69c7c0 00:06:11.922 passed 00:06:11.922 Test: test_sequence_reverse ...passed 00:06:11.922 Test: test_sequence_copy_elision ...passed 00:06:11.922 Test: test_sequence_accel_buffers ...passed 00:06:11.922 Test: test_sequence_memory_domain ...[2024-06-11 05:58:42.310453] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:06:11.922 [2024-06-11 05:58:42.310649] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:06:11.922 passed 00:06:11.922 Test: test_sequence_module_memory_domain ...passed 00:06:11.922 Test: test_sequence_crypto ...passed 00:06:11.922 Test: test_sequence_driver ...[2024-06-11 05:58:42.317577] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7feaaea747c0 using driver: ut 00:06:11.922 
[2024-06-11 05:58:42.317678] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7feaaea747c0 through driver: ut 00:06:11.922 passed 00:06:11.922 Test: test_sequence_same_iovs ...passed 00:06:11.922 Test: test_sequence_crc32 ...passed 00:06:11.922 Suite: accel 00:06:11.922 Test: test_spdk_accel_task_complete ...passed 00:06:11.922 Test: test_get_task ...passed 00:06:11.922 Test: test_spdk_accel_submit_copy ...passed 00:06:11.922 Test: test_spdk_accel_submit_dualcast ...[2024-06-11 05:58:42.322678] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:06:11.922 [2024-06-11 05:58:42.322738] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:06:11.922 passed 00:06:11.922 Test: test_spdk_accel_submit_compare ...passed 00:06:11.922 Test: test_spdk_accel_submit_fill ...passed 00:06:11.922 Test: test_spdk_accel_submit_crc32c ...passed 00:06:11.922 Test: test_spdk_accel_submit_crc32cv ...passed 00:06:11.922 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:06:11.922 Test: test_spdk_accel_submit_xor ...passed 00:06:11.922 Test: test_spdk_accel_module_find_by_name ...passed 00:06:11.922 Test: test_spdk_accel_module_register ...passed 00:06:11.922 00:06:11.922 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.922 suites 2 2 n/a 0 0 00:06:11.922 tests 26 26 26 0 0 00:06:11.922 asserts 831 831 831 0 n/a 00:06:11.922 00:06:11.922 Elapsed time = 0.036 seconds 00:06:11.922 00:06:11.922 real 0m0.087s 00:06:11.922 user 0m0.044s 00:06:11.922 sys 0m0.044s 00:06:11.923 05:58:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.923 05:58:42 -- common/autotest_common.sh@10 -- # set +x 00:06:11.923 ************************************ 00:06:11.923 END TEST unittest_accel 00:06:11.923 ************************************ 00:06:11.923 05:58:42 -- unit/unittest.sh@238 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:06:11.923 05:58:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:11.923 05:58:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:11.923 05:58:42 -- common/autotest_common.sh@10 -- # set +x 00:06:11.923 ************************************ 00:06:11.923 START TEST unittest_ioat 00:06:11.923 ************************************ 00:06:11.923 05:58:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:06:11.923 00:06:11.923 00:06:11.923 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.923 http://cunit.sourceforge.net/ 00:06:11.923 00:06:11.923 00:06:11.923 Suite: ioat 00:06:11.923 Test: ioat_state_check ...passed 00:06:11.923 00:06:11.923 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.923 suites 1 1 n/a 0 0 00:06:11.923 tests 1 1 1 0 0 00:06:11.923 asserts 32 32 32 0 n/a 00:06:11.923 00:06:11.923 Elapsed time = 0.000 seconds 00:06:11.923 00:06:11.923 real 0m0.037s 00:06:11.923 user 0m0.031s 00:06:11.923 sys 0m0.006s 00:06:11.923 05:58:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.923 05:58:42 -- common/autotest_common.sh@10 -- # set +x 00:06:11.923 ************************************ 00:06:11.923 END TEST unittest_ioat 00:06:11.923 ************************************ 00:06:11.923 05:58:42 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' 
/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:11.923 05:58:42 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:06:11.923 05:58:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:11.923 05:58:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:11.923 05:58:42 -- common/autotest_common.sh@10 -- # set +x 00:06:11.923 ************************************ 00:06:11.923 START TEST unittest_idxd_user 00:06:11.923 ************************************ 00:06:11.923 05:58:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:06:11.923 00:06:11.923 00:06:11.923 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.923 http://cunit.sourceforge.net/ 00:06:11.923 00:06:11.923 00:06:11.923 Suite: idxd_user 00:06:11.923 Test: test_idxd_wait_cmd ...[2024-06-11 05:58:42.526225] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:06:11.923 [2024-06-11 05:58:42.526697] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:06:11.923 passed 00:06:11.923 Test: test_idxd_reset_dev ...[2024-06-11 05:58:42.527191] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:06:11.923 [2024-06-11 05:58:42.527388] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:06:11.923 passed 00:06:11.923 Test: test_idxd_group_config ...passed 00:06:11.923 Test: test_idxd_wq_config ...passed 00:06:11.923 00:06:11.923 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.923 suites 1 1 n/a 0 0 00:06:11.923 tests 4 4 4 0 0 00:06:11.923 asserts 20 20 20 0 n/a 00:06:11.923 00:06:11.923 Elapsed time = 0.001 seconds 00:06:12.201 00:06:12.201 real 0m0.034s 00:06:12.201 user 0m0.020s 00:06:12.201 sys 0m0.012s 00:06:12.201 05:58:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.201 05:58:42 -- common/autotest_common.sh@10 -- # set +x 00:06:12.201 ************************************ 00:06:12.201 END TEST unittest_idxd_user 00:06:12.201 ************************************ 00:06:12.201 05:58:42 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:06:12.201 05:58:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:12.201 05:58:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.201 05:58:42 -- common/autotest_common.sh@10 -- # set +x 00:06:12.201 ************************************ 00:06:12.201 START TEST unittest_iscsi 00:06:12.201 ************************************ 00:06:12.201 05:58:42 -- common/autotest_common.sh@1104 -- # unittest_iscsi 00:06:12.201 05:58:42 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:06:12.201 00:06:12.201 00:06:12.201 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.201 http://cunit.sourceforge.net/ 00:06:12.201 00:06:12.201 00:06:12.201 Suite: conn_suite 00:06:12.201 Test: read_task_split_in_order_case ...passed 00:06:12.201 Test: read_task_split_reverse_order_case ...passed 00:06:12.201 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:06:12.201 Test: process_non_read_task_completion_test ...passed 00:06:12.201 Test: free_tasks_on_connection ...passed 00:06:12.201 Test: free_tasks_with_queued_datain ...passed 00:06:12.201 Test: 
abort_queued_datain_task_test ...passed 00:06:12.201 Test: abort_queued_datain_tasks_test ...passed 00:06:12.201 00:06:12.201 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.201 suites 1 1 n/a 0 0 00:06:12.201 tests 8 8 8 0 0 00:06:12.202 asserts 230 230 230 0 n/a 00:06:12.202 00:06:12.202 Elapsed time = 0.000 seconds 00:06:12.202 05:58:42 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:06:12.202 00:06:12.202 00:06:12.202 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.202 http://cunit.sourceforge.net/ 00:06:12.202 00:06:12.202 00:06:12.202 Suite: iscsi_suite 00:06:12.202 Test: param_negotiation_test ...passed 00:06:12.202 Test: list_negotiation_test ...passed 00:06:12.202 Test: parse_valid_test ...passed 00:06:12.202 Test: parse_invalid_test ...[2024-06-11 05:58:42.682669] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:06:12.202 [2024-06-11 05:58:42.683168] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:06:12.202 [2024-06-11 05:58:42.683297] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key 00:06:12.202 [2024-06-11 05:58:42.683456] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:06:12.202 [2024-06-11 05:58:42.683725] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 00:06:12.202 [2024-06-11 05:58:42.683836] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:06:12.202 [2024-06-11 05:58:42.684074] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B 00:06:12.202 passed 00:06:12.202 00:06:12.202 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.202 suites 1 1 n/a 0 0 00:06:12.202 tests 4 4 4 0 0 00:06:12.202 asserts 161 161 161 0 n/a 00:06:12.202 00:06:12.202 Elapsed time = 0.007 seconds 00:06:12.202 05:58:42 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:06:12.202 00:06:12.202 00:06:12.202 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.202 http://cunit.sourceforge.net/ 00:06:12.202 00:06:12.202 00:06:12.202 Suite: iscsi_target_node_suite 00:06:12.202 Test: add_lun_test_cases ...[2024-06-11 05:58:42.725895] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:06:12.202 [2024-06-11 05:58:42.726321] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:06:12.202 [2024-06-11 05:58:42.726448] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:06:12.202 [2024-06-11 05:58:42.726500] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:06:12.202 [2024-06-11 05:58:42.726544] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:06:12.202 passed 00:06:12.202 Test: allow_any_allowed ...passed 00:06:12.202 Test: allow_ipv6_allowed ...passed 00:06:12.202 Test: allow_ipv6_denied ...passed 00:06:12.202 Test: allow_ipv6_invalid ...passed 00:06:12.202 Test: allow_ipv4_allowed ...passed 00:06:12.202 Test: allow_ipv4_denied ...passed 00:06:12.202 Test: allow_ipv4_invalid 
...passed 00:06:12.202 Test: node_access_allowed ...passed 00:06:12.202 Test: node_access_denied_by_empty_netmask ...passed 00:06:12.202 Test: node_access_multi_initiator_groups_cases ...passed 00:06:12.202 Test: allow_iscsi_name_multi_maps_case ...passed 00:06:12.202 Test: chap_param_test_cases ...[2024-06-11 05:58:42.727070] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:06:12.202 [2024-06-11 05:58:42.727122] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:06:12.202 passed[2024-06-11 05:58:42.727200] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:06:12.202 [2024-06-11 05:58:42.727256] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:06:12.202 [2024-06-11 05:58:42.727309] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:06:12.202 00:06:12.202 00:06:12.202 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.202 suites 1 1 n/a 0 0 00:06:12.202 tests 13 13 13 0 0 00:06:12.202 asserts 50 50 50 0 n/a 00:06:12.202 00:06:12.202 Elapsed time = 0.001 seconds 00:06:12.202 05:58:42 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:06:12.202 00:06:12.202 00:06:12.202 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.202 http://cunit.sourceforge.net/ 00:06:12.202 00:06:12.202 00:06:12.202 Suite: iscsi_suite 00:06:12.202 Test: op_login_check_target_test ...[2024-06-11 05:58:42.773658] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:06:12.202 passed 00:06:12.202 Test: op_login_session_normal_test ...[2024-06-11 05:58:42.774276] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:12.202 [2024-06-11 05:58:42.774358] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:12.202 [2024-06-11 05:58:42.774432] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:12.202 [2024-06-11 05:58:42.774528] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:06:12.202 [2024-06-11 05:58:42.774702] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:06:12.202 [2024-06-11 05:58:42.774893] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:06:12.202 [2024-06-11 05:58:42.775098] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:06:12.202 passed 00:06:12.202 Test: maxburstlength_test ...[2024-06-11 05:58:42.775500] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:06:12.202 [2024-06-11 05:58:42.775579] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header 
(opcode=5) failed on NULL(NULL) 00:06:12.202 passed 00:06:12.202 Test: underflow_for_read_transfer_test ...passed 00:06:12.202 Test: underflow_for_zero_read_transfer_test ...passed 00:06:12.202 Test: underflow_for_request_sense_test ...passed 00:06:12.202 Test: underflow_for_check_condition_test ...passed 00:06:12.202 Test: add_transfer_task_test ...passed 00:06:12.202 Test: get_transfer_task_test ...passed 00:06:12.202 Test: del_transfer_task_test ...passed 00:06:12.202 Test: clear_all_transfer_tasks_test ...passed 00:06:12.202 Test: build_iovs_test ...passed 00:06:12.202 Test: build_iovs_with_md_test ...passed 00:06:12.202 Test: pdu_hdr_op_login_test ...[2024-06-11 05:58:42.777766] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:06:12.202 [2024-06-11 05:58:42.777958] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:06:12.202 [2024-06-11 05:58:42.778096] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:06:12.202 passed 00:06:12.202 Test: pdu_hdr_op_text_test ...[2024-06-11 05:58:42.778248] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:06:12.202 [2024-06-11 05:58:42.778405] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:06:12.202 [2024-06-11 05:58:42.778486] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:06:12.202 passed 00:06:12.202 Test: pdu_hdr_op_logout_test ...passed 00:06:12.202 Test: pdu_hdr_op_scsi_test ...[2024-06-11 05:58:42.778609] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
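Each "unit/unittest.sh@NN -- /path/to/..._ut" entry in this log is the harness executing one standalone CUnit binary, so any single suite can be reproduced without the wrapper by invoking that binary directly. A sketch, assuming a built SPDK tree at the repo path shown in this run:

    # hypothetical standalone re-run of just the iSCSI PDU suite
    cd /home/vagrant/spdk_repo/spdk
    ./test/unit/lib/iscsi/iscsi.c/iscsi_ut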
00:06:12.202 [2024-06-11 05:58:42.778855] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:06:12.202 [2024-06-11 05:58:42.778930] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:06:12.202 [2024-06-11 05:58:42.779008] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:06:12.202 [2024-06-11 05:58:42.779139] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:06:12.202 [2024-06-11 05:58:42.779291] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:06:12.202 passed 00:06:12.202 Test: pdu_hdr_op_task_mgmt_test ...[2024-06-11 05:58:42.779540] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:06:12.202 [2024-06-11 05:58:42.779670] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:06:12.202 [2024-06-11 05:58:42.779771] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:06:12.202 passed 00:06:12.202 Test: pdu_hdr_op_nopout_test ...[2024-06-11 05:58:42.780053] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:06:12.202 [2024-06-11 05:58:42.780166] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:06:12.202 [2024-06-11 05:58:42.780203] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:06:12.202 passed 00:06:12.202 Test: pdu_hdr_op_data_test ...[2024-06-11 05:58:42.780240] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:06:12.202 [2024-06-11 05:58:42.780292] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:06:12.202 [2024-06-11 05:58:42.780381] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:06:12.202 [2024-06-11 05:58:42.780472] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:06:12.203 [2024-06-11 05:58:42.780554] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:06:12.203 [2024-06-11 05:58:42.780631] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:06:12.203 [2024-06-11 05:58:42.780752] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:06:12.203 [2024-06-11 05:58:42.780817] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:06:12.203 passed 00:06:12.203 Test: empty_text_with_cbit_test ...passed 00:06:12.203 Test: pdu_payload_read_test ...[2024-06-11 05:58:42.783290] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:06:12.203 passed 00:06:12.203 Test: data_out_pdu_sequence_test ...passed 00:06:12.203 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:06:12.203 00:06:12.203 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.203 suites 1 1 n/a 0 0 00:06:12.203 tests 24 24 24 0 0 00:06:12.203 asserts 150253 150253 150253 0 n/a 00:06:12.203 00:06:12.203 Elapsed time = 0.020 seconds 00:06:12.203 05:58:42 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:06:12.203 00:06:12.203 00:06:12.203 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.203 http://cunit.sourceforge.net/ 00:06:12.203 00:06:12.203 00:06:12.203 Suite: init_grp_suite 00:06:12.203 Test: create_initiator_group_success_case ...passed 00:06:12.203 Test: find_initiator_group_success_case ...passed 00:06:12.203 Test: register_initiator_group_twice_case ...passed 00:06:12.203 Test: add_initiator_name_success_case ...passed 00:06:12.203 Test: add_initiator_name_fail_case ...[2024-06-11 05:58:42.835630] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:06:12.203 passed 00:06:12.203 Test: delete_all_initiator_names_success_case ...passed 00:06:12.203 Test: add_netmask_success_case ...passed 00:06:12.203 Test: add_netmask_fail_case ...passed 00:06:12.203 Test: delete_all_netmasks_success_case ...passed 00:06:12.203 Test: initiator_name_overwrite_all_to_any_case ...passed 00:06:12.203 Test: netmask_overwrite_all_to_any_case ...passed 00:06:12.203 Test: add_delete_initiator_names_case ...[2024-06-11 05:58:42.836189] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:06:12.203 passed 00:06:12.203 Test: add_duplicated_initiator_names_case ...passed 00:06:12.203 Test: delete_nonexisting_initiator_names_case ...passed 00:06:12.203 Test: add_delete_netmasks_case ...passed 00:06:12.203 Test: add_duplicated_netmasks_case ...passed 00:06:12.203 Test: delete_nonexisting_netmasks_case ...passed 00:06:12.203 00:06:12.203 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.203 suites 1 1 n/a 0 0 00:06:12.203 tests 17 17 17 0 0 00:06:12.203 asserts 108 108 108 0 n/a 00:06:12.203 00:06:12.203 Elapsed time = 0.001 seconds 00:06:12.461 05:58:42 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:06:12.461 00:06:12.461 00:06:12.461 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.461 http://cunit.sourceforge.net/ 00:06:12.461 00:06:12.461 00:06:12.461 Suite: portal_grp_suite 00:06:12.461 Test: portal_create_ipv4_normal_case ...passed 00:06:12.461 Test: portal_create_ipv6_normal_case ...passed 00:06:12.461 Test: portal_create_ipv4_wildcard_case ...passed 00:06:12.461 Test: portal_create_ipv6_wildcard_case ...passed 00:06:12.461 Test: portal_create_twice_case ...[2024-06-11 05:58:42.877530] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:06:12.461 passed 00:06:12.461 Test: portal_grp_register_unregister_case ...passed 00:06:12.461 Test: portal_grp_register_twice_case ...passed 00:06:12.461 Test: portal_grp_add_delete_case ...passed 00:06:12.461 Test: portal_grp_add_delete_twice_case ...passed 00:06:12.461 00:06:12.461 Run Summary: Type Total Ran 
Passed Failed Inactive 00:06:12.461 suites 1 1 n/a 0 0 00:06:12.461 tests 9 9 9 0 0 00:06:12.461 asserts 44 44 44 0 n/a 00:06:12.461 00:06:12.461 Elapsed time = 0.005 seconds 00:06:12.461 00:06:12.461 real 0m0.290s 00:06:12.461 user 0m0.156s 00:06:12.461 sys 0m0.136s 00:06:12.461 05:58:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.461 05:58:42 -- common/autotest_common.sh@10 -- # set +x 00:06:12.461 ************************************ 00:06:12.461 END TEST unittest_iscsi 00:06:12.461 ************************************ 00:06:12.461 05:58:42 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:06:12.461 05:58:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:12.461 05:58:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.461 05:58:42 -- common/autotest_common.sh@10 -- # set +x 00:06:12.461 ************************************ 00:06:12.461 START TEST unittest_json 00:06:12.461 ************************************ 00:06:12.461 05:58:42 -- common/autotest_common.sh@1104 -- # unittest_json 00:06:12.461 05:58:42 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:06:12.461 00:06:12.461 00:06:12.462 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.462 http://cunit.sourceforge.net/ 00:06:12.462 00:06:12.462 00:06:12.462 Suite: json 00:06:12.462 Test: test_parse_literal ...passed 00:06:12.462 Test: test_parse_string_simple ...passed 00:06:12.462 Test: test_parse_string_control_chars ...passed 00:06:12.462 Test: test_parse_string_utf8 ...passed 00:06:12.462 Test: test_parse_string_escapes_twochar ...passed 00:06:12.462 Test: test_parse_string_escapes_unicode ...passed 00:06:12.462 Test: test_parse_number ...passed 00:06:12.462 Test: test_parse_array ...passed 00:06:12.462 Test: test_parse_object ...passed 00:06:12.462 Test: test_parse_nesting ...passed 00:06:12.462 Test: test_parse_comment ...passed 00:06:12.462 00:06:12.462 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.462 suites 1 1 n/a 0 0 00:06:12.462 tests 11 11 11 0 0 00:06:12.462 asserts 1516 1516 1516 0 n/a 00:06:12.462 00:06:12.462 Elapsed time = 0.001 seconds 00:06:12.462 05:58:42 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:06:12.462 00:06:12.462 00:06:12.462 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.462 http://cunit.sourceforge.net/ 00:06:12.462 00:06:12.462 00:06:12.462 Suite: json 00:06:12.462 Test: test_strequal ...passed 00:06:12.462 Test: test_num_to_uint16 ...passed 00:06:12.462 Test: test_num_to_int32 ...passed 00:06:12.462 Test: test_num_to_uint64 ...passed 00:06:12.462 Test: test_decode_object ...passed 00:06:12.462 Test: test_decode_array ...passed 00:06:12.462 Test: test_decode_bool ...passed 00:06:12.462 Test: test_decode_uint16 ...passed 00:06:12.462 Test: test_decode_int32 ...passed 00:06:12.462 Test: test_decode_uint32 ...passed 00:06:12.462 Test: test_decode_uint64 ...passed 00:06:12.462 Test: test_decode_string ...passed 00:06:12.462 Test: test_decode_uuid ...passed 00:06:12.462 Test: test_find ...passed 00:06:12.462 Test: test_find_array ...passed 00:06:12.462 Test: test_iterating ...passed 00:06:12.462 Test: test_free_object ...passed 00:06:12.462 00:06:12.462 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.462 suites 1 1 n/a 0 0 00:06:12.462 tests 17 17 17 0 0 00:06:12.462 asserts 236 236 236 0 n/a 00:06:12.462 00:06:12.462 Elapsed time = 0.001 seconds 00:06:12.462 05:58:43 -- 
unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:06:12.462 00:06:12.462 00:06:12.462 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.462 http://cunit.sourceforge.net/ 00:06:12.462 00:06:12.462 00:06:12.462 Suite: json 00:06:12.462 Test: test_write_literal ...passed 00:06:12.462 Test: test_write_string_simple ...passed 00:06:12.462 Test: test_write_string_escapes ...passed 00:06:12.462 Test: test_write_string_utf16le ...passed 00:06:12.462 Test: test_write_number_int32 ...passed 00:06:12.462 Test: test_write_number_uint32 ...passed 00:06:12.462 Test: test_write_number_uint128 ...passed 00:06:12.462 Test: test_write_string_number_uint128 ...passed 00:06:12.462 Test: test_write_number_int64 ...passed 00:06:12.462 Test: test_write_number_uint64 ...passed 00:06:12.462 Test: test_write_number_double ...passed 00:06:12.462 Test: test_write_uuid ...passed 00:06:12.462 Test: test_write_array ...passed 00:06:12.462 Test: test_write_object ...passed 00:06:12.462 Test: test_write_nesting ...passed 00:06:12.462 Test: test_write_val ...passed 00:06:12.462 00:06:12.462 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.462 suites 1 1 n/a 0 0 00:06:12.462 tests 16 16 16 0 0 00:06:12.462 asserts 918 918 918 0 n/a 00:06:12.462 00:06:12.462 Elapsed time = 0.005 seconds 00:06:12.462 05:58:43 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:06:12.462 00:06:12.462 00:06:12.462 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.462 http://cunit.sourceforge.net/ 00:06:12.462 00:06:12.462 00:06:12.462 Suite: jsonrpc 00:06:12.462 Test: test_parse_request ...passed 00:06:12.462 Test: test_parse_request_streaming ...passed 00:06:12.462 00:06:12.462 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.462 suites 1 1 n/a 0 0 00:06:12.462 tests 2 2 2 0 0 00:06:12.462 asserts 289 289 289 0 n/a 00:06:12.462 00:06:12.462 Elapsed time = 0.005 seconds 00:06:12.815 00:06:12.815 real 0m0.165s 00:06:12.815 user 0m0.072s 00:06:12.815 sys 0m0.093s 00:06:12.815 05:58:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.815 05:58:43 -- common/autotest_common.sh@10 -- # set +x 00:06:12.815 ************************************ 00:06:12.815 END TEST unittest_json 00:06:12.815 ************************************ 00:06:12.815 05:58:43 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:06:12.815 05:58:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:12.815 05:58:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.815 05:58:43 -- common/autotest_common.sh@10 -- # set +x 00:06:12.815 ************************************ 00:06:12.815 START TEST unittest_rpc 00:06:12.815 ************************************ 00:06:12.815 05:58:43 -- common/autotest_common.sh@1104 -- # unittest_rpc 00:06:12.815 05:58:43 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:06:12.815 00:06:12.815 00:06:12.815 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.815 http://cunit.sourceforge.net/ 00:06:12.815 00:06:12.815 00:06:12.815 Suite: rpc 00:06:12.815 Test: test_jsonrpc_handler ...passed 00:06:12.815 Test: test_spdk_rpc_is_method_allowed ...passed 00:06:12.815 Test: test_rpc_get_methods ...[2024-06-11 05:58:43.211775] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:06:12.815 passed 00:06:12.815 Test: test_rpc_spdk_get_version 
...passed 00:06:12.815 Test: test_spdk_rpc_listen_close ...passed 00:06:12.815 00:06:12.815 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.815 suites 1 1 n/a 0 0 00:06:12.815 tests 5 5 5 0 0 00:06:12.815 asserts 20 20 20 0 n/a 00:06:12.815 00:06:12.815 Elapsed time = 0.000 seconds 00:06:12.815 00:06:12.815 real 0m0.038s 00:06:12.815 user 0m0.026s 00:06:12.815 sys 0m0.012s 00:06:12.815 05:58:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.815 05:58:43 -- common/autotest_common.sh@10 -- # set +x 00:06:12.815 ************************************ 00:06:12.815 END TEST unittest_rpc 00:06:12.815 ************************************ 00:06:12.815 05:58:43 -- unit/unittest.sh@245 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:06:12.815 05:58:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:12.815 05:58:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.815 05:58:43 -- common/autotest_common.sh@10 -- # set +x 00:06:12.815 ************************************ 00:06:12.815 START TEST unittest_notify 00:06:12.815 ************************************ 00:06:12.815 05:58:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:06:12.815 00:06:12.815 00:06:12.815 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.815 http://cunit.sourceforge.net/ 00:06:12.815 00:06:12.815 00:06:12.815 Suite: app_suite 00:06:12.815 Test: notify ...passed 00:06:12.815 00:06:12.815 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.815 suites 1 1 n/a 0 0 00:06:12.815 tests 1 1 1 0 0 00:06:12.815 asserts 13 13 13 0 n/a 00:06:12.815 00:06:12.815 Elapsed time = 0.000 seconds 00:06:12.815 00:06:12.815 real 0m0.041s 00:06:12.815 user 0m0.029s 00:06:12.815 sys 0m0.013s 00:06:12.815 05:58:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.815 05:58:43 -- common/autotest_common.sh@10 -- # set +x 00:06:12.815 ************************************ 00:06:12.815 END TEST unittest_notify 00:06:12.815 ************************************ 00:06:12.815 05:58:43 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:06:12.815 05:58:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:12.815 05:58:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.815 05:58:43 -- common/autotest_common.sh@10 -- # set +x 00:06:13.098 ************************************ 00:06:13.098 START TEST unittest_nvme 00:06:13.098 ************************************ 00:06:13.098 05:58:43 -- common/autotest_common.sh@1104 -- # unittest_nvme 00:06:13.098 05:58:43 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:06:13.098 00:06:13.098 00:06:13.098 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.098 http://cunit.sourceforge.net/ 00:06:13.098 00:06:13.098 00:06:13.098 Suite: nvme 00:06:13.098 Test: test_opc_data_transfer ...passed 00:06:13.098 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:06:13.098 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:06:13.098 Test: test_trid_parse_and_compare ...[2024-06-11 05:58:43.420656] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:06:13.098 [2024-06-11 05:58:43.421069] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:13.098 [2024-06-11 05:58:43.421201] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:06:13.098 [2024-06-11 05:58:43.421255] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:13.098 [2024-06-11 05:58:43.421306] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:06:13.098 [2024-06-11 05:58:43.421424] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:13.098 passed 00:06:13.098 Test: test_trid_trtype_str ...passed 00:06:13.098 Test: test_trid_adrfam_str ...passed 00:06:13.098 Test: test_nvme_ctrlr_probe ...[2024-06-11 05:58:43.421701] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:06:13.098 passed 00:06:13.098 Test: test_spdk_nvme_probe ...[2024-06-11 05:58:43.421828] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:13.098 [2024-06-11 05:58:43.421874] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:06:13.098 [2024-06-11 05:58:43.422000] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:06:13.098 passed 00:06:13.098 Test: test_spdk_nvme_connect ...[2024-06-11 05:58:43.422058] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:06:13.098 [2024-06-11 05:58:43.422177] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:06:13.098 [2024-06-11 05:58:43.422577] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:13.098 passed 00:06:13.098 Test: test_nvme_ctrlr_probe_internal ...[2024-06-11 05:58:43.422656] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:06:13.098 [2024-06-11 05:58:43.422820] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:06:13.098 passed 00:06:13.098 Test: test_nvme_init_controllers ...[2024-06-11 05:58:43.422885] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:06:13.098 [2024-06-11 05:58:43.422993] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:06:13.098 passed 00:06:13.098 Test: test_nvme_driver_init ...[2024-06-11 05:58:43.423131] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:06:13.098 [2024-06-11 05:58:43.423188] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:13.098 [2024-06-11 05:58:43.532189] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:06:13.098 [2024-06-11 05:58:43.532447] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:06:13.098 passed 00:06:13.098 Test: test_spdk_nvme_detach ...passed 00:06:13.098 Test: test_nvme_completion_poll_cb ...passed 00:06:13.098 Test: test_nvme_user_copy_cmd_complete ...passed 00:06:13.098 Test: test_nvme_allocate_request_null ...passed 
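The trid parse failures earlier in this suite spell out the transport ID string grammar: whitespace-separated key:value pairs, ':' (or '=') between key and value, key names capped at 31 characters, and no key without a value. A sketch of a well-formed string, passed here to SPDK's identify example app via its -r flag; the example path and the PCI address are illustrative assumptions:

    # hypothetical: identify one local PCIe controller by transport ID
    ./build/examples/identify -r "trtype:PCIe traddr:0000:00:06.0"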
00:06:13.098 Test: test_nvme_allocate_request ...passed
00:06:13.098 Test: test_nvme_free_request ...passed
00:06:13.098 Test: test_nvme_allocate_request_user_copy ...passed
00:06:13.098 Test: test_nvme_robust_mutex_init_shared ...passed
00:06:13.098 Test: test_nvme_request_check_timeout ...passed
00:06:13.098 Test: test_nvme_wait_for_completion ...passed
00:06:13.098 Test: test_spdk_nvme_parse_func ...passed
00:06:13.098 Test: test_spdk_nvme_detach_async ...passed
00:06:13.098 Test: test_nvme_parse_addr ...[2024-06-11 05:58:43.533595] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL
00:06:13.098 passed
00:06:13.098
00:06:13.098 Run Summary: Type Total Ran Passed Failed Inactive
00:06:13.098 suites 1 1 n/a 0 0
00:06:13.098 tests 25 25 25 0 0
00:06:13.098 asserts 326 326 326 0 n/a
00:06:13.098
00:06:13.098 Elapsed time = 0.007 seconds
00:06:13.098 05:58:43 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut
00:06:13.098
00:06:13.098
00:06:13.098 CUnit - A unit testing framework for C - Version 2.1-3
00:06:13.098 http://cunit.sourceforge.net/
00:06:13.098
00:06:13.098
00:06:13.098 Suite: nvme_ctrlr
00:06:13.098 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-06-11 05:58:43.578248] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:13.098 passed
00:06:13.098 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-06-11 05:58:43.580134] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:13.098 passed
00:06:13.098 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-06-11 05:58:43.581397] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:13.098 passed
00:06:13.098 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-06-11 05:58:43.582626] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:13.098 passed
00:06:13.098 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-06-11 05:58:43.583891] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:13.098 [2024-06-11 05:58:43.585053] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 [2024-06-11 05:58:43.586263] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 [2024-06-11 05:58:43.587427] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 passed
00:06:13.098 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-06-11 05:58:43.589764] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:13.098 [2024-06-11 05:58:43.591989] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 [2024-06-11 05:58:43.593136] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 passed
00:06:13.099 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-06-11 05:58:43.595445] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:13.099 [2024-06-11 05:58:43.596630] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 [2024-06-11 05:58:43.598970] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 passed
00:06:13.099 Test: test_nvme_ctrlr_init_delay ...[2024-06-11 05:58:43.601451] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:13.099 passed
00:06:13.099 Test: test_alloc_io_qpair_rr_1 ...[2024-06-11 05:58:43.602773] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:13.099 [2024-06-11 05:58:43.602938] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5304:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs
00:06:13.099 [2024-06-11 05:58:43.603188] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method
00:06:13.099 passed
00:06:13.099 Test: test_ctrlr_get_default_ctrlr_opts ...[2024-06-11 05:58:43.603297] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method
00:06:13.099 [2024-06-11 05:58:43.603396] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method
00:06:13.099 passed
00:06:13.099 Test: test_ctrlr_get_default_io_qpair_opts ...passed
00:06:13.099 Test: test_alloc_io_qpair_wrr_1 ...[2024-06-11 05:58:43.603592] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:13.099 passed
00:06:13.099 Test: test_alloc_io_qpair_wrr_2 ...[2024-06-11 05:58:43.603829] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:06:13.099 [2024-06-11 05:58:43.603983] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5304:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs
00:06:13.099 passed
00:06:13.099 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-06-11 05:58:43.604347] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4832:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size!
00:06:13.099 [2024-06-11 05:58:43.604588] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4869:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed!
00:06:13.099 [2024-06-11 05:58:43.604739] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4909:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed!
00:06:13.099 [2024-06-11 05:58:43.604871] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4869:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:06:13.099 passed 00:06:13.099 Test: test_nvme_ctrlr_fail ...passed 00:06:13.099 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:06:13.099 Test: test_nvme_ctrlr_set_supported_features ...[2024-06-11 05:58:43.604974] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:06:13.099 passed 00:06:13.099 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:06:13.099 Test: test_nvme_ctrlr_test_active_ns ...[2024-06-11 05:58:43.605346] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.489 passed 00:06:13.489 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:06:13.489 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:06:13.489 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:06:13.489 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-06-11 05:58:43.908892] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.489 passed 00:06:13.489 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-06-11 05:58:43.915749] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.489 passed 00:06:13.489 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-06-11 05:58:43.916947] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.489 [2024-06-11 05:58:43.917010] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2869:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:06:13.489 passed 00:06:13.489 Test: test_alloc_io_qpair_fail ...[2024-06-11 05:58:43.918128] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.489 passed 00:06:13.489 Test: test_nvme_ctrlr_add_remove_process ...passed 00:06:13.489 Test: test_nvme_ctrlr_set_arbitration_feature ...[2024-06-11 05:58:43.918215] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:06:13.489 passed 00:06:13.489 Test: test_nvme_ctrlr_set_state ...passed 00:06:13.489 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-06-11 05:58:43.918363] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1464:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
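
The admin_queue_size message that recurs through this suite is a clamp, not a failure: a zero in spdk_nvme_ctrlr_opts is raised to the spec minimum during controller construction and the test still passes. A sketch of reading the library defaults; the field name follows the message above, though the exact struct layout varies between SPDK releases:

#include <stdio.h>
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_nvme_ctrlr_opts opts;

    /* Passing sizeof(opts) lets the library fill only the fields this
     * build of SPDK actually knows about. */
    spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));

    /* A caller-supplied 0 here is what triggers the "use min value"
     * log line during nvme_ctrlr_construct(). */
    printf("default admin_queue_size = %u\n", (unsigned)opts.admin_queue_size);
    return 0;
}
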
00:06:13.489 [2024-06-11 05:58:43.918416] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.489 passed 00:06:13.489 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-06-11 05:58:43.942264] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.489 passed 00:06:13.489 Test: test_nvme_ctrlr_ns_mgmt ...[2024-06-11 05:58:43.994918] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.489 passed 00:06:13.489 Test: test_nvme_ctrlr_reset ...[2024-06-11 05:58:43.996602] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.489 passed 00:06:13.489 Test: test_nvme_ctrlr_aer_callback ...[2024-06-11 05:58:43.997022] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.489 passed 00:06:13.489 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-06-11 05:58:43.998445] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.489 passed 00:06:13.489 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:06:13.489 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:06:13.489 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-06-11 05:58:44.000278] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.489 passed 00:06:13.489 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:06:13.489 Test: test_nvme_ctrlr_ana_resize ...[2024-06-11 05:58:44.001613] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.489 passed 00:06:13.489 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:06:13.489 Test: test_nvme_transport_ctrlr_ready ...[2024-06-11 05:58:44.003188] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4015:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:06:13.490 [2024-06-11 05:58:44.003268] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:06:13.490 passed 00:06:13.490 Test: test_nvme_ctrlr_disable ...[2024-06-11 05:58:44.003323] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:13.490 passed 00:06:13.490 00:06:13.490 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.490 suites 1 1 n/a 0 0 00:06:13.490 tests 43 43 43 0 0 00:06:13.490 asserts 10418 10418 10418 0 n/a 00:06:13.490 00:06:13.490 Elapsed time = 0.385 seconds 00:06:13.490 05:58:44 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:06:13.490 00:06:13.490 00:06:13.490 CUnit - A unit testing framework for C - Version 2.1-3 
00:06:13.490 http://cunit.sourceforge.net/ 00:06:13.490 00:06:13.490 00:06:13.490 Suite: nvme_ctrlr_cmd 00:06:13.490 Test: test_get_log_pages ...passed 00:06:13.490 Test: test_set_feature_cmd ...passed 00:06:13.490 Test: test_set_feature_ns_cmd ...passed 00:06:13.490 Test: test_get_feature_cmd ...passed 00:06:13.490 Test: test_get_feature_ns_cmd ...passed 00:06:13.490 Test: test_abort_cmd ...passed 00:06:13.490 Test: test_set_host_id_cmds ...[2024-06-11 05:58:44.069232] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 502:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:06:13.490 passed 00:06:13.490 Test: test_io_cmd_raw_no_payload_build ...passed 00:06:13.490 Test: test_io_raw_cmd ...passed 00:06:13.490 Test: test_io_raw_cmd_with_md ...passed 00:06:13.490 Test: test_namespace_attach ...passed 00:06:13.490 Test: test_namespace_detach ...passed 00:06:13.490 Test: test_namespace_create ...passed 00:06:13.490 Test: test_namespace_delete ...passed 00:06:13.490 Test: test_doorbell_buffer_config ...passed 00:06:13.490 Test: test_format_nvme ...passed 00:06:13.490 Test: test_fw_commit ...passed 00:06:13.490 Test: test_fw_image_download ...passed 00:06:13.490 Test: test_sanitize ...passed 00:06:13.490 Test: test_directive ...passed 00:06:13.490 Test: test_nvme_request_add_abort ...passed 00:06:13.490 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:06:13.490 Test: test_nvme_ctrlr_cmd_identify ...passed 00:06:13.490 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:06:13.490 00:06:13.490 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.490 suites 1 1 n/a 0 0 00:06:13.490 tests 24 24 24 0 0 00:06:13.490 asserts 198 198 198 0 n/a 00:06:13.490 00:06:13.490 Elapsed time = 0.002 seconds 00:06:13.827 05:58:44 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:06:13.827 00:06:13.827 00:06:13.827 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.827 http://cunit.sourceforge.net/ 00:06:13.827 00:06:13.827 00:06:13.827 Suite: nvme_ctrlr_cmd 00:06:13.827 Test: test_geometry_cmd ...passed 00:06:13.827 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:06:13.827 00:06:13.827 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.827 suites 1 1 n/a 0 0 00:06:13.827 tests 2 2 2 0 0 00:06:13.827 asserts 7 7 7 0 n/a 00:06:13.827 00:06:13.827 Elapsed time = 0.000 seconds 00:06:13.827 05:58:44 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:06:13.827 00:06:13.827 00:06:13.827 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.827 http://cunit.sourceforge.net/ 00:06:13.827 00:06:13.827 00:06:13.827 Suite: nvme 00:06:13.827 Test: test_nvme_ns_construct ...passed 00:06:13.827 Test: test_nvme_ns_uuid ...passed 00:06:13.827 Test: test_nvme_ns_csi ...passed 00:06:13.827 Test: test_nvme_ns_data ...passed 00:06:13.827 Test: test_nvme_ns_set_identify_data ...passed 00:06:13.827 Test: test_spdk_nvme_ns_get_values ...passed 00:06:13.827 Test: test_spdk_nvme_ns_is_active ...passed 00:06:13.827 Test: spdk_nvme_ns_supports ...passed 00:06:13.827 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:06:13.827 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:06:13.827 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:06:13.827 Test: test_nvme_ns_find_id_desc ...passed 00:06:13.827 00:06:13.827 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.827 suites 1 1 n/a 0 0 00:06:13.827 tests 
12 12 12 0 0 00:06:13.827 asserts 83 83 83 0 n/a 00:06:13.827 00:06:13.827 Elapsed time = 0.000 seconds 00:06:13.827 05:58:44 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:06:13.827 00:06:13.827 00:06:13.827 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.827 http://cunit.sourceforge.net/ 00:06:13.827 00:06:13.827 00:06:13.827 Suite: nvme_ns_cmd 00:06:13.827 Test: split_test ...passed 00:06:13.827 Test: split_test2 ...passed 00:06:13.827 Test: split_test3 ...passed 00:06:13.827 Test: split_test4 ...passed 00:06:13.827 Test: test_nvme_ns_cmd_flush ...passed 00:06:13.827 Test: test_nvme_ns_cmd_dataset_management ...passed 00:06:13.827 Test: test_nvme_ns_cmd_copy ...passed 00:06:13.827 Test: test_io_flags ...[2024-06-11 05:58:44.169612] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:06:13.827 passed 00:06:13.827 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:06:13.827 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:06:13.827 Test: test_nvme_ns_cmd_reservation_register ...passed 00:06:13.827 Test: test_nvme_ns_cmd_reservation_release ...passed 00:06:13.827 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:06:13.827 Test: test_nvme_ns_cmd_reservation_report ...passed 00:06:13.827 Test: test_cmd_child_request ...passed 00:06:13.827 Test: test_nvme_ns_cmd_readv ...passed 00:06:13.827 Test: test_nvme_ns_cmd_read_with_md ...passed 00:06:13.827 Test: test_nvme_ns_cmd_writev ...[2024-06-11 05:58:44.171893] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:06:13.827 passed 00:06:13.827 Test: test_nvme_ns_cmd_write_with_md ...passed 00:06:13.827 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:06:13.827 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:06:13.827 Test: test_nvme_ns_cmd_comparev ...passed 00:06:13.827 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:06:13.827 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:06:13.827 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:06:13.827 Test: test_nvme_ns_cmd_setup_request ...passed 00:06:13.827 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:06:13.827 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-06-11 05:58:44.175017] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:06:13.827 passed 00:06:13.827 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-06-11 05:58:44.175397] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:06:13.827 passed 00:06:13.827 Test: test_nvme_ns_cmd_verify ...passed 00:06:13.827 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:06:13.827 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:06:13.827 00:06:13.827 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.827 suites 1 1 n/a 0 0 00:06:13.827 tests 32 32 32 0 0 00:06:13.827 asserts 550 550 550 0 n/a 00:06:13.827 00:06:13.827 Elapsed time = 0.009 seconds 00:06:13.827 05:58:44 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:06:13.827 00:06:13.827 00:06:13.827 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.827 http://cunit.sourceforge.net/ 00:06:13.827 00:06:13.827 00:06:13.827 Suite: nvme_ns_cmd 00:06:13.827 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 
00:06:13.827 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:06:13.827 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:06:13.827 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:06:13.827 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:06:13.827 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:06:13.827 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:06:13.827 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:06:13.827 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:06:13.827 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:06:13.827 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:06:13.827 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:06:13.827 00:06:13.827 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.827 suites 1 1 n/a 0 0 00:06:13.827 tests 12 12 12 0 0 00:06:13.827 asserts 123 123 123 0 n/a 00:06:13.827 00:06:13.827 Elapsed time = 0.001 seconds 00:06:13.827 05:58:44 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:06:13.827 00:06:13.827 00:06:13.827 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.827 http://cunit.sourceforge.net/ 00:06:13.827 00:06:13.827 00:06:13.827 Suite: nvme_qpair 00:06:13.827 Test: test3 ...passed 00:06:13.827 Test: test_ctrlr_failed ...passed 00:06:13.827 Test: struct_packing ...passed 00:06:13.827 Test: test_nvme_qpair_process_completions ...[2024-06-11 05:58:44.237936] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:13.828 [2024-06-11 05:58:44.238288] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:13.828 [2024-06-11 05:58:44.238360] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:06:13.828 passed 00:06:13.828 Test: test_nvme_completion_is_retry ...passed 00:06:13.828 Test: test_get_status_string ...passed 00:06:13.828 Test: test_nvme_qpair_add_cmd_error_injection ...[2024-06-11 05:58:44.238450] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:06:13.828 passed 00:06:13.828 Test: test_nvme_qpair_submit_request ...passed 00:06:13.828 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:06:13.828 Test: test_nvme_qpair_manual_complete_request ...passed 00:06:13.828 Test: test_nvme_qpair_init_deinit ...passed 00:06:13.828 Test: test_nvme_get_sgl_print_info ...passed[2024-06-11 05:58:44.238827] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:13.828 00:06:13.828 00:06:13.828 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.828 suites 1 1 n/a 0 0 00:06:13.828 tests 12 12 12 0 0 00:06:13.828 asserts 154 154 154 0 n/a 00:06:13.828 00:06:13.828 Elapsed time = 0.001 seconds 00:06:13.828 05:58:44 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:06:13.828 00:06:13.828 00:06:13.828 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.828 http://cunit.sourceforge.net/ 00:06:13.828 00:06:13.828 00:06:13.828 Suite: nvme_pcie 00:06:13.828 Test: test_prp_list_append 
...[2024-06-11 05:58:44.268680] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:06:13.828 [2024-06-11 05:58:44.269048] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:06:13.828 [2024-06-11 05:58:44.269092] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:06:13.828 [2024-06-11 05:58:44.269315] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:06:13.828 passed 00:06:13.828 Test: test_nvme_pcie_hotplug_monitor ...[2024-06-11 05:58:44.269394] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:06:13.828 passed 00:06:13.828 Test: test_shadow_doorbell_update ...passed 00:06:13.828 Test: test_build_contig_hw_sgl_request ...passed 00:06:13.828 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:06:13.828 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:06:13.828 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:06:13.828 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:06:13.828 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:06:13.828 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:06:13.828 Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-06-11 05:58:44.269569] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:06:13.828 [2024-06-11 05:58:44.269659] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
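
The test_prp_list_append failures above restate the PRP rules from the NVMe spec: every buffer address must be dword aligned, and each PRP entry after the first must start on a page boundary. An illustrative checker, not SPDK code, that reproduces the two failing addresses from the log:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u

/* Hypothetical helper mirroring the constraints the suite checks. */
static bool prp_entry_ok(uint64_t addr, bool is_first)
{
    if (addr & 3) {
        return false; /* not dword aligned */
    }
    if (!is_first && (addr & (PAGE_SIZE - 1))) {
        return false; /* PRP2 and list entries must be page aligned */
    }
    return true;
}

int main(void)
{
    /* Both values are taken from the errors above. */
    printf("0x100001 as first entry: %d\n", prp_entry_ok(0x100001, true));   /* 0 */
    printf("0x900800 as second entry: %d\n", prp_entry_ok(0x900800, false)); /* 0 */
    return 0;
}
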
00:06:13.828 passed 00:06:13.828 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:06:13.828 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-06-11 05:58:44.269738] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:06:13.828 passed 00:06:13.828 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:06:13.828 00:06:13.828 [2024-06-11 05:58:44.269824] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:06:13.828 [2024-06-11 05:58:44.269870] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:06:13.828 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.828 suites 1 1 n/a 0 0 00:06:13.828 tests 14 14 14 0 0 00:06:13.828 asserts 235 235 235 0 n/a 00:06:13.828 00:06:13.828 Elapsed time = 0.001 seconds 00:06:13.828 05:58:44 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:06:13.828 00:06:13.828 00:06:13.828 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.828 http://cunit.sourceforge.net/ 00:06:13.828 00:06:13.828 00:06:13.828 Suite: nvme_ns_cmd 00:06:13.828 Test: nvme_poll_group_create_test ...passed 00:06:13.828 Test: nvme_poll_group_add_remove_test ...passed 00:06:13.828 Test: nvme_poll_group_process_completions ...passed 00:06:13.828 Test: nvme_poll_group_destroy_test ...passed 00:06:13.828 Test: nvme_poll_group_get_free_stats ...passed 00:06:13.828 00:06:13.828 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.828 suites 1 1 n/a 0 0 00:06:13.828 tests 5 5 5 0 0 00:06:13.828 asserts 75 75 75 0 n/a 00:06:13.828 00:06:13.828 Elapsed time = 0.001 seconds 00:06:13.828 05:58:44 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:06:13.828 00:06:13.828 00:06:13.828 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.828 http://cunit.sourceforge.net/ 00:06:13.828 00:06:13.828 00:06:13.828 Suite: nvme_quirks 00:06:13.828 Test: test_nvme_quirks_striping ...passed 00:06:13.828 00:06:13.828 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.828 suites 1 1 n/a 0 0 00:06:13.828 tests 1 1 1 0 0 00:06:13.828 asserts 5 5 5 0 n/a 00:06:13.828 00:06:13.828 Elapsed time = 0.000 seconds 00:06:13.828 05:58:44 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:06:13.828 00:06:13.828 00:06:13.828 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.828 http://cunit.sourceforge.net/ 00:06:13.828 00:06:13.828 00:06:13.828 Suite: nvme_tcp 00:06:13.828 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:06:13.828 Test: test_nvme_tcp_build_iovs ...passed 00:06:13.828 Test: test_nvme_tcp_build_sgl_request ...[2024-06-11 05:58:44.378761] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7fff7d182290, and the iovcnt=16, remaining_size=28672 00:06:13.828 passed 00:06:13.828 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:06:13.828 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:06:13.828 Test: test_nvme_tcp_req_complete_safe ...passed 00:06:13.828 Test: test_nvme_tcp_req_get ...passed 00:06:13.828 Test: test_nvme_tcp_req_init ...passed 00:06:13.828 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:06:13.828 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:06:13.828 Test: 
test_nvme_tcp_qpair_set_recv_state ...[2024-06-11 05:58:44.379502] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff7d183fb0 is same with the state(6) to be set 00:06:13.828 passed 00:06:13.828 Test: test_nvme_tcp_alloc_reqs ...passed 00:06:13.828 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-06-11 05:58:44.379909] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff7d183140 is same with the state(5) to be set 00:06:13.828 passed 00:06:13.828 Test: test_nvme_tcp_pdu_ch_handle ...[2024-06-11 05:58:44.379988] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7fff7d183c70 00:06:13.828 [2024-06-11 05:58:44.380057] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:06:13.828 [2024-06-11 05:58:44.380176] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff7d183600 is same with the state(5) to be set 00:06:13.828 [2024-06-11 05:58:44.380249] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:06:13.828 [2024-06-11 05:58:44.380366] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff7d183600 is same with the state(5) to be set 00:06:13.828 [2024-06-11 05:58:44.380431] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:06:13.828 [2024-06-11 05:58:44.380478] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff7d183600 is same with the state(5) to be set 00:06:13.828 [2024-06-11 05:58:44.380536] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff7d183600 is same with the state(5) to be set 00:06:13.828 [2024-06-11 05:58:44.380593] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff7d183600 is same with the state(5) to be set 00:06:13.828 [2024-06-11 05:58:44.380673] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff7d183600 is same with the state(5) to be set 00:06:13.828 [2024-06-11 05:58:44.380726] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff7d183600 is same with the state(5) to be set 00:06:13.828 passed 00:06:13.828 Test: test_nvme_tcp_qpair_connect_sock ...[2024-06-11 05:58:44.380790] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff7d183600 is same with the state(5) to be set 00:06:13.828 [2024-06-11 05:58:44.381049] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:06:13.828 [2024-06-11 05:58:44.381123] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:06:13.828 [2024-06-11 05:58:44.381440] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr 
nvme_parse_addr() failed 00:06:13.828 passed 00:06:13.828 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:06:13.828 Test: test_nvme_tcp_c2h_payload_handle ...[2024-06-11 05:58:44.381590] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7fff7d1837b0): PDU Sequence Error 00:06:13.828 passed 00:06:13.828 Test: test_nvme_tcp_icresp_handle ...[2024-06-11 05:58:44.381738] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:06:13.828 [2024-06-11 05:58:44.381790] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1515:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:06:13.829 [2024-06-11 05:58:44.381844] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff7d183150 is same with the state(5) to be set 00:06:13.829 [2024-06-11 05:58:44.381907] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:06:13.829 [2024-06-11 05:58:44.381960] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff7d183150 is same with the state(5) to be set 00:06:13.829 passed 00:06:13.829 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:06:13.829 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-06-11 05:58:44.382035] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff7d183150 is same with the state(0) to be set 00:06:13.829 [2024-06-11 05:58:44.382107] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7fff7d183c70): PDU Sequence Error 00:06:13.829 [2024-06-11 05:58:44.382216] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7fff7d182430 00:06:13.829 passed 00:06:13.829 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:06:13.829 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-06-11 05:58:44.382402] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7fff7d181ab0, errno=0, rc=0 00:06:13.829 [2024-06-11 05:58:44.382467] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff7d181ab0 is same with the state(5) to be set 00:06:13.829 [2024-06-11 05:58:44.382561] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff7d181ab0 is same with the state(5) to be set 00:06:13.829 [2024-06-11 05:58:44.382622] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fff7d181ab0 (0): Success 00:06:13.829 passed 00:06:13.829 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-06-11 05:58:44.382686] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fff7d181ab0 (0): Success 00:06:14.111 [2024-06-11 05:58:44.546902] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
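
That minimum-queue-size rejection is what a normal caller sees through the public API: spdk_nvme_ctrlr_alloc_io_qpair() simply returns NULL when the requested queue cannot hold at least two entries. A sketch of the usual allocate/poll/free cycle, assuming ctrlr came from a successful probe or spdk_nvme_connect():

#include <stddef.h>
#include <stdio.h>
#include "spdk/nvme.h"

static void do_io(struct spdk_nvme_ctrlr *ctrlr)
{
    struct spdk_nvme_qpair *qpair;

    /* NULL opts with size 0 means "use the controller defaults";
     * an explicit io_queue_size below 2 fails as in the tests above. */
    qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
    if (qpair == NULL) {
        fprintf(stderr, "alloc_io_qpair failed\n");
        return;
    }

    /* ... submit reads/writes against qpair here, then reap: */
    spdk_nvme_qpair_process_completions(qpair, 0 /* 0 = no limit */);

    spdk_nvme_ctrlr_free_io_qpair(qpair);
}
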
00:06:14.111 [2024-06-11 05:58:44.547049] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:06:14.111 passed 00:06:14.111 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:06:14.111 Test: test_nvme_tcp_poll_group_get_stats ...[2024-06-11 05:58:44.547344] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:14.111 [2024-06-11 05:58:44.547398] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:14.111 passed 00:06:14.111 Test: test_nvme_tcp_ctrlr_construct ...[2024-06-11 05:58:44.547672] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:06:14.111 [2024-06-11 05:58:44.547740] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:14.111 [2024-06-11 05:58:44.547881] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:06:14.111 [2024-06-11 05:58:44.547980] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:14.111 [2024-06-11 05:58:44.548141] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000001540 with addr=192.168.1.78, port=23 00:06:14.111 passed 00:06:14.111 Test: test_nvme_tcp_qpair_submit_request ...[2024-06-11 05:58:44.548240] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:14.111 [2024-06-11 05:58:44.548384] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000001a80, and the iovcnt=1, remaining_size=1024 00:06:14.111 [2024-06-11 05:58:44.548444] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:06:14.111 passed 00:06:14.111 00:06:14.111 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.111 suites 1 1 n/a 0 0 00:06:14.111 tests 27 27 27 0 0 00:06:14.111 asserts 624 624 624 0 n/a 00:06:14.111 00:06:14.111 Elapsed time = 0.170 seconds 00:06:14.111 05:58:44 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:06:14.111 00:06:14.111 00:06:14.111 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.111 http://cunit.sourceforge.net/ 00:06:14.111 00:06:14.111 00:06:14.111 Suite: nvme_transport 00:06:14.111 Test: test_nvme_get_transport ...passed 00:06:14.111 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:06:14.111 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:06:14.111 Test: test_nvme_transport_poll_group_add_remove ...passed 00:06:14.111 Test: test_ctrlr_get_memory_domains ...passed 00:06:14.111 00:06:14.111 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.111 suites 1 1 n/a 0 0 00:06:14.111 tests 5 5 5 0 0 00:06:14.111 asserts 28 28 28 0 n/a 00:06:14.111 00:06:14.111 Elapsed time = 0.000 seconds 00:06:14.111 05:58:44 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:06:14.111 00:06:14.111 00:06:14.111 CUnit - A unit testing framework for 
C - Version 2.1-3 00:06:14.111 http://cunit.sourceforge.net/ 00:06:14.111 00:06:14.111 00:06:14.111 Suite: nvme_io_msg 00:06:14.111 Test: test_nvme_io_msg_send ...passed 00:06:14.111 Test: test_nvme_io_msg_process ...passed 00:06:14.111 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:06:14.111 00:06:14.111 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.111 suites 1 1 n/a 0 0 00:06:14.111 tests 3 3 3 0 0 00:06:14.111 asserts 56 56 56 0 n/a 00:06:14.111 00:06:14.111 Elapsed time = 0.000 seconds 00:06:14.111 05:58:44 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:06:14.111 00:06:14.111 00:06:14.111 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.111 http://cunit.sourceforge.net/ 00:06:14.111 00:06:14.111 00:06:14.111 Suite: nvme_pcie_common 00:06:14.112 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-06-11 05:58:44.673283] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:06:14.112 passed 00:06:14.112 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:06:14.112 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:06:14.112 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-06-11 05:58:44.673877] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:06:14.112 [2024-06-11 05:58:44.673978] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:06:14.112 [2024-06-11 05:58:44.674014] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:06:14.112 passed 00:06:14.112 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:06:14.112 Test: test_nvme_pcie_poll_group_get_stats ...[2024-06-11 05:58:44.674398] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:14.112 [2024-06-11 05:58:44.674442] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:14.112 passed 00:06:14.112 00:06:14.112 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.112 suites 1 1 n/a 0 0 00:06:14.112 tests 6 6 6 0 0 00:06:14.112 asserts 148 148 148 0 n/a 00:06:14.112 00:06:14.112 Elapsed time = 0.001 seconds 00:06:14.112 05:58:44 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:06:14.112 00:06:14.112 00:06:14.112 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.112 http://cunit.sourceforge.net/ 00:06:14.112 00:06:14.112 00:06:14.112 Suite: nvme_fabric 00:06:14.112 Test: test_nvme_fabric_prop_set_cmd ...passed 00:06:14.112 Test: test_nvme_fabric_prop_get_cmd ...passed 00:06:14.112 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:06:14.112 Test: test_nvme_fabric_discover_probe ...passed 00:06:14.112 Test: test_nvme_fabric_qpair_connect ...[2024-06-11 05:58:44.709420] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:06:14.112 passed 00:06:14.112 00:06:14.112 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.112 suites 1 
1 n/a 0 0 00:06:14.112 tests 5 5 5 0 0 00:06:14.112 asserts 60 60 60 0 n/a 00:06:14.112 00:06:14.112 Elapsed time = 0.001 seconds 00:06:14.112 05:58:44 -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:06:14.112 00:06:14.112 00:06:14.112 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.112 http://cunit.sourceforge.net/ 00:06:14.112 00:06:14.112 00:06:14.112 Suite: nvme_opal 00:06:14.112 Test: test_opal_nvme_security_recv_send_done ...passed 00:06:14.112 Test: test_opal_add_short_atom_header ...[2024-06-11 05:58:44.746218] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:06:14.112 passed 00:06:14.112 00:06:14.112 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.112 suites 1 1 n/a 0 0 00:06:14.112 tests 2 2 2 0 0 00:06:14.112 asserts 22 22 22 0 n/a 00:06:14.112 00:06:14.112 Elapsed time = 0.000 seconds 00:06:14.372 00:06:14.372 real 0m1.358s 00:06:14.372 user 0m0.630s 00:06:14.372 sys 0m0.584s 00:06:14.372 05:58:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.372 05:58:44 -- common/autotest_common.sh@10 -- # set +x 00:06:14.372 ************************************ 00:06:14.372 END TEST unittest_nvme 00:06:14.372 ************************************ 00:06:14.372 05:58:44 -- unit/unittest.sh@247 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:06:14.372 05:58:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:14.372 05:58:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:14.372 05:58:44 -- common/autotest_common.sh@10 -- # set +x 00:06:14.372 ************************************ 00:06:14.372 START TEST unittest_log 00:06:14.372 ************************************ 00:06:14.372 05:58:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:06:14.372 00:06:14.372 00:06:14.372 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.372 http://cunit.sourceforge.net/ 00:06:14.372 00:06:14.372 00:06:14.372 Suite: log 00:06:14.372 Test: log_test ...[2024-06-11 05:58:44.834936] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:06:14.372 [2024-06-11 05:58:44.835283] log_ut.c: 55:log_test: *DEBUG*: log test 00:06:14.372 log dump test: 00:06:14.372 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:06:14.372 spdk dump test: 00:06:14.372 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:06:14.372 spdk dump test: 00:06:14.372 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:06:14.372 00000010 65 20 63 68 61 72 73 e chars 00:06:14.372 passed 00:06:15.310 Test: deprecation ...passed 00:06:15.310 00:06:15.310 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.310 suites 1 1 n/a 0 0 00:06:15.310 tests 2 2 2 0 0 00:06:15.310 asserts 73 73 73 0 n/a 00:06:15.310 00:06:15.310 Elapsed time = 0.001 seconds 00:06:15.310 00:06:15.310 real 0m1.041s 00:06:15.310 user 0m0.018s 00:06:15.310 sys 0m0.024s 00:06:15.310 05:58:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.310 ************************************ 00:06:15.310 END TEST unittest_log 00:06:15.310 05:58:45 -- common/autotest_common.sh@10 -- # set +x 00:06:15.310 ************************************ 00:06:15.310 05:58:45 -- unit/unittest.sh@248 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:06:15.310 05:58:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 
']' 00:06:15.310 05:58:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.310 05:58:45 -- common/autotest_common.sh@10 -- # set +x 00:06:15.310 ************************************ 00:06:15.310 START TEST unittest_lvol 00:06:15.310 ************************************ 00:06:15.310 05:58:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:06:15.310 00:06:15.310 00:06:15.310 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.310 http://cunit.sourceforge.net/ 00:06:15.310 00:06:15.310 00:06:15.310 Suite: lvol 00:06:15.310 Test: lvs_init_unload_success ...[2024-06-11 05:58:45.939902] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:06:15.310 passed 00:06:15.310 Test: lvs_init_destroy_success ...[2024-06-11 05:58:45.940539] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:06:15.310 passed 00:06:15.310 Test: lvs_init_opts_success ...passed 00:06:15.310 Test: lvs_unload_lvs_is_null_fail ...passed 00:06:15.310 Test: lvs_names ...[2024-06-11 05:58:45.940852] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:06:15.310 [2024-06-11 05:58:45.940937] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:06:15.310 [2024-06-11 05:58:45.940995] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:06:15.310 [2024-06-11 05:58:45.941209] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:06:15.310 passed 00:06:15.310 Test: lvol_create_destroy_success ...passed 00:06:15.310 Test: lvol_create_fail ...[2024-06-11 05:58:45.941883] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:06:15.310 [2024-06-11 05:58:45.942018] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:06:15.310 passed 00:06:15.310 Test: lvol_destroy_fail ...[2024-06-11 05:58:45.942389] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:06:15.310 passed 00:06:15.310 Test: lvol_close ...[2024-06-11 05:58:45.942653] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:06:15.310 [2024-06-11 05:58:45.942722] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:06:15.310 passed 00:06:15.310 Test: lvol_resize ...passed 00:06:15.310 Test: lvol_set_read_only ...passed 00:06:15.310 Test: test_lvs_load ...[2024-06-11 05:58:45.943677] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:06:15.310 [2024-06-11 05:58:45.943742] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:06:15.310 passed 00:06:15.310 Test: lvols_load ...[2024-06-11 05:58:45.944033] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:06:15.310 [2024-06-11 05:58:45.944186] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:06:15.310 passed 00:06:15.310 Test: lvol_open ...passed 00:06:15.310 Test: lvol_snapshot ...passed 00:06:15.310 Test: lvol_snapshot_fail ...[2024-06-11 
05:58:45.945085] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:06:15.310 passed 00:06:15.310 Test: lvol_clone ...passed 00:06:15.310 Test: lvol_clone_fail ...[2024-06-11 05:58:45.945778] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:06:15.310 passed 00:06:15.310 Test: lvol_iter_clones ...passed 00:06:15.310 Test: lvol_refcnt ...[2024-06-11 05:58:45.946417] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 2751d6b0-1cfb-4f06-bcb2-416057d1894e because it is still open 00:06:15.310 passed 00:06:15.310 Test: lvol_names ...[2024-06-11 05:58:45.946665] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:06:15.310 [2024-06-11 05:58:45.946785] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:15.310 [2024-06-11 05:58:45.947020] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:06:15.310 passed 00:06:15.310 Test: lvol_create_thin_provisioned ...passed 00:06:15.310 Test: lvol_rename ...[2024-06-11 05:58:45.947603] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:15.310 [2024-06-11 05:58:45.947709] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:06:15.310 passed 00:06:15.310 Test: lvs_rename ...[2024-06-11 05:58:45.948029] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:06:15.310 passed 00:06:15.310 Test: lvol_inflate ...[2024-06-11 05:58:45.948296] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:06:15.310 passed 00:06:15.310 Test: lvol_decouple_parent ...[2024-06-11 05:58:45.948614] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:06:15.310 passed 00:06:15.310 Test: lvol_get_xattr ...passed 00:06:15.310 Test: lvol_esnap_reload ...passed 00:06:15.310 Test: lvol_esnap_create_bad_args ...[2024-06-11 05:58:45.949135] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:06:15.310 [2024-06-11 05:58:45.949194] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
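
Every binary in this run, lvol_ut included, is a plain CUnit program: register a suite, add tests, run in verbose mode, and the per-test lines and Run Summary blocks seen throughout this log fall out of the framework. A minimal standalone equivalent, with invented suite and test names:

#include <CUnit/Basic.h>

static void test_example(void)
{
    CU_ASSERT(1 + 1 == 2);
}

int main(void)
{
    CU_pSuite suite;
    unsigned int failures;

    if (CU_initialize_registry() != CUE_SUCCESS) {
        return CU_get_error();
    }
    suite = CU_add_suite("example", NULL, NULL);
    if (suite == NULL || CU_add_test(suite, "test_example", test_example) == NULL) {
        CU_cleanup_registry();
        return CU_get_error();
    }

    /* CU_BRM_VERBOSE produces the "Test: ... passed" lines and the
     * Run Summary table captured all through this log. */
    CU_basic_set_mode(CU_BRM_VERBOSE);
    CU_basic_run_tests();
    failures = CU_get_number_of_failures();
    CU_cleanup_registry();
    return failures ? 1 : 0;
}
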
00:06:15.310 [2024-06-11 05:58:45.949262] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:06:15.310 [2024-06-11 05:58:45.949403] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:15.310 [2024-06-11 05:58:45.949552] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:06:15.310 passed 00:06:15.310 Test: lvol_esnap_create_delete ...passed 00:06:15.310 Test: lvol_esnap_load_esnaps ...[2024-06-11 05:58:45.949971] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:06:15.310 passed 00:06:15.310 Test: lvol_esnap_missing ...[2024-06-11 05:58:45.950138] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:06:15.310 [2024-06-11 05:58:45.950199] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:06:15.310 passed 00:06:15.310 Test: lvol_esnap_hotplug ... 00:06:15.310 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:06:15.310 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:06:15.310 [2024-06-11 05:58:45.950912] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 6c171bbd-60f6-479d-a47d-bd96c1c07dcd: failed to create esnap bs_dev: error -12 00:06:15.310 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:06:15.310 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:06:15.310 [2024-06-11 05:58:45.951127] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol bad99c38-e62c-4c75-a350-bab3f11cb8db: failed to create esnap bs_dev: error -12 00:06:15.310 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:06:15.311 [2024-06-11 05:58:45.951275] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 29447db6-632a-4a03-8665-7dc34da6c4f6: failed to create esnap bs_dev: error -12 00:06:15.311 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:06:15.311 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:06:15.311 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:06:15.311 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:06:15.311 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:06:15.311 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:06:15.311 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:06:15.311 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:06:15.311 passed 00:06:15.311 Test: lvol_get_by ...passed 00:06:15.311 00:06:15.311 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.311 suites 1 1 n/a 0 0 00:06:15.311 tests 34 34 34 0 0 00:06:15.311 asserts 1439 1439 1439 0 n/a 00:06:15.311 00:06:15.311 Elapsed time = 0.013 seconds 00:06:15.569 00:06:15.569 real 0m0.061s 00:06:15.569 user 0m0.031s 00:06:15.569 sys 0m0.028s 00:06:15.569 05:58:45 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.569 05:58:45 -- common/autotest_common.sh@10 -- # set +x 00:06:15.569 ************************************ 00:06:15.569 END TEST unittest_lvol 00:06:15.569 ************************************ 00:06:15.569 05:58:46 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:15.569 05:58:46 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:06:15.569 05:58:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:15.569 05:58:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.569 05:58:46 -- common/autotest_common.sh@10 -- # set +x 00:06:15.569 ************************************ 00:06:15.569 START TEST unittest_nvme_rdma 00:06:15.569 ************************************ 00:06:15.569 05:58:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:06:15.570 00:06:15.570 00:06:15.570 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.570 http://cunit.sourceforge.net/ 00:06:15.570 00:06:15.570 00:06:15.570 Suite: nvme_rdma 00:06:15.570 Test: test_nvme_rdma_build_sgl_request ...[2024-06-11 05:58:46.067803] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:06:15.570 [2024-06-11 05:58:46.068327] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1628:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:06:15.570 [2024-06-11 05:58:46.068556] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1684:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:06:15.570 passed 00:06:15.570 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:06:15.570 Test: test_nvme_rdma_build_contig_request ...[2024-06-11 05:58:46.068946] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1565:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:06:15.570 passed 00:06:15.570 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:06:15.570 Test: test_nvme_rdma_create_reqs ...[2024-06-11 05:58:46.069441] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:06:15.570 passed 00:06:15.570 Test: test_nvme_rdma_create_rsps ...[2024-06-11 05:58:46.070143] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:06:15.570 passed 00:06:15.570 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-06-11 05:58:46.070508] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:06:15.570 [2024-06-11 05:58:46.070761] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:06:15.570 passed 00:06:15.570 Test: test_nvme_rdma_poller_create ...passed 00:06:15.570 Test: test_nvme_rdma_qpair_process_cm_event ...[2024-06-11 05:58:46.071215] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:06:15.570 passed 00:06:15.570 Test: test_nvme_rdma_ctrlr_construct ...passed 00:06:15.570 Test: test_nvme_rdma_req_put_and_get ...passed 00:06:15.570 Test: test_nvme_rdma_req_init ...passed 00:06:15.570 Test: test_nvme_rdma_validate_cm_event ...[2024-06-11 05:58:46.072210] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:06:15.570 passed 00:06:15.570 Test: test_nvme_rdma_qpair_init ...[2024-06-11 05:58:46.072327] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:06:15.570 passed 00:06:15.570 Test: test_nvme_rdma_qpair_submit_request ...passed 00:06:15.570 Test: test_nvme_rdma_memory_domain ...[2024-06-11 05:58:46.072970] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:06:15.570 passed 00:06:15.570 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:06:15.570 Test: test_rdma_get_memory_translation ...[2024-06-11 05:58:46.073412] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:06:15.570 [2024-06-11 05:58:46.073601] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:06:15.570 passed 00:06:15.570 Test: test_get_rdma_qpair_from_wc ...passed 00:06:15.570 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:06:15.570 Test: test_nvme_rdma_poll_group_get_stats ...[2024-06-11 05:58:46.074158] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:15.570 [2024-06-11 05:58:46.074346] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:15.570 passed 00:06:15.570 Test: test_nvme_rdma_qpair_set_poller ...[2024-06-11 05:58:46.074652] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:06:15.570 [2024-06-11 05:58:46.074804] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:06:15.570 [2024-06-11 05:58:46.074956] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffc7ba48840 on poll group 0x60b0000001a0 00:06:15.570 [2024-06-11 05:58:46.075129] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:06:15.570 [2024-06-11 05:58:46.075316] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:06:15.570 [2024-06-11 05:58:46.075470] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffc7ba48840 on poll group 0x60b0000001a0 00:06:15.570 [2024-06-11 05:58:46.075649] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:06:15.570 passed 00:06:15.570 00:06:15.570 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.570 suites 1 1 n/a 0 0 00:06:15.570 tests 22 22 22 0 0 00:06:15.570 asserts 412 412 412 0 n/a 00:06:15.570 00:06:15.570 Elapsed time = 0.004 seconds 00:06:15.570 00:06:15.570 real 0m0.050s 00:06:15.570 user 0m0.028s 00:06:15.570 sys 0m0.017s 00:06:15.570 05:58:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.570 05:58:46 -- common/autotest_common.sh@10 -- # set +x 00:06:15.570 ************************************ 00:06:15.570 END TEST unittest_nvme_rdma 00:06:15.570 ************************************ 00:06:15.570 05:58:46 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:06:15.570 05:58:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:15.570 05:58:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.570 05:58:46 -- common/autotest_common.sh@10 -- # set +x 00:06:15.570 ************************************ 00:06:15.570 START TEST unittest_nvmf_transport 00:06:15.570 ************************************ 00:06:15.570 05:58:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:06:15.570 00:06:15.570 00:06:15.570 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.570 http://cunit.sourceforge.net/ 00:06:15.570 00:06:15.570 00:06:15.570 Suite: nvmf 00:06:15.570 Test: test_spdk_nvmf_transport_create ...[2024-06-11 05:58:46.182643] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:06:15.570 [2024-06-11 05:58:46.183316] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:06:15.570 [2024-06-11 05:58:46.183404] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:06:15.570 [2024-06-11 05:58:46.183578] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:06:15.570 passed 00:06:15.570 Test: test_nvmf_transport_poll_group_create ...passed 00:06:15.570 Test: test_spdk_nvmf_transport_opts_init ...[2024-06-11 05:58:46.183961] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
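The transport_ut failures above state the option rules directly: io_unit_size must be non-zero and max_io_size must be a power of two of at least 8 KiB (which is why 4096 is rejected). A simplified validation sketch, with assumed struct and function names rather than SPDK's nvmf_transport_create():

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct transport_opts {
    uint32_t io_unit_size;
    uint32_t max_io_size;
};

static bool
opts_valid(const struct transport_opts *opts)
{
    if (opts->io_unit_size == 0) {
        fprintf(stderr, "io_unit_size cannot be 0\n");
        return false;
    }
    /* power of two: exactly one bit set; minimum 8 KiB */
    if (opts->max_io_size < 8 * 1024 ||
        (opts->max_io_size & (opts->max_io_size - 1)) != 0) {
        fprintf(stderr, "max_io_size %u must be a power of 2 and be greater "
                "than or equal 8KB\n", opts->max_io_size);
        return false;
    }
    return true;
}

int
main(void)
{
    struct transport_opts bad = { .io_unit_size = 4096, .max_io_size = 4096 };

    return opts_valid(&bad) ? 1 : 0;  /* 4096 < 8 KiB, rejected as in the log */
}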
00:06:15.570 [2024-06-11 05:58:46.184084] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:06:15.570 [2024-06-11 05:58:46.184134] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:06:15.570 passed 00:06:15.570 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:06:15.570 00:06:15.570 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.570 suites 1 1 n/a 0 0 00:06:15.570 tests 4 4 4 0 0 00:06:15.570 asserts 49 49 49 0 n/a 00:06:15.570 00:06:15.570 Elapsed time = 0.002 seconds 00:06:15.570 00:06:15.570 real 0m0.057s 00:06:15.570 user 0m0.037s 00:06:15.570 sys 0m0.020s 00:06:15.570 05:58:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.570 05:58:46 -- common/autotest_common.sh@10 -- # set +x 00:06:15.570 ************************************ 00:06:15.570 END TEST unittest_nvmf_transport 00:06:15.570 ************************************ 00:06:15.830 05:58:46 -- unit/unittest.sh@252 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:06:15.830 05:58:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:15.830 05:58:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.830 05:58:46 -- common/autotest_common.sh@10 -- # set +x 00:06:15.830 ************************************ 00:06:15.830 START TEST unittest_rdma 00:06:15.830 ************************************ 00:06:15.830 05:58:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:06:15.830 00:06:15.830 00:06:15.830 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.830 http://cunit.sourceforge.net/ 00:06:15.830 00:06:15.830 00:06:15.830 Suite: rdma_common 00:06:15.830 Test: test_spdk_rdma_pd ...[2024-06-11 05:58:46.289247] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:06:15.830 [2024-06-11 05:58:46.289713] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:06:15.830 passed 00:06:15.830 00:06:15.830 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.830 suites 1 1 n/a 0 0 00:06:15.830 tests 1 1 1 0 0 00:06:15.830 asserts 31 31 31 0 n/a 00:06:15.830 00:06:15.830 Elapsed time = 0.001 seconds 00:06:15.830 00:06:15.830 real 0m0.039s 00:06:15.830 user 0m0.012s 00:06:15.830 sys 0m0.027s 00:06:15.830 05:58:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.830 05:58:46 -- common/autotest_common.sh@10 -- # set +x 00:06:15.830 ************************************ 00:06:15.830 END TEST unittest_rdma 00:06:15.830 ************************************ 00:06:15.830 05:58:46 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:15.830 05:58:46 -- unit/unittest.sh@256 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:06:15.830 05:58:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:15.830 05:58:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.830 05:58:46 -- common/autotest_common.sh@10 -- # set +x 00:06:15.830 ************************************ 00:06:15.830 START TEST unittest_nvme_cuse 00:06:15.830 ************************************ 00:06:15.830 05:58:46 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:06:15.830 00:06:15.830 00:06:15.830 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.830 http://cunit.sourceforge.net/ 00:06:15.830 00:06:15.830 00:06:15.830 Suite: nvme_cuse 00:06:15.830 Test: test_cuse_nvme_submit_io_read_write ...passed 00:06:15.830 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:06:15.830 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:06:15.830 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:06:15.830 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:06:15.830 Test: test_cuse_nvme_submit_io ...[2024-06-11 05:58:46.394564] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:06:15.830 passed 00:06:15.830 Test: test_cuse_nvme_reset ...[2024-06-11 05:58:46.394925] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:06:15.830 passed 00:06:15.830 Test: test_nvme_cuse_stop ...passed 00:06:15.830 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:06:15.830 00:06:15.830 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.830 suites 1 1 n/a 0 0 00:06:15.830 tests 9 9 9 0 0 00:06:15.830 asserts 121 121 121 0 n/a 00:06:15.830 00:06:15.830 Elapsed time = 0.002 seconds 00:06:15.830 00:06:15.830 real 0m0.043s 00:06:15.830 user 0m0.020s 00:06:15.830 sys 0m0.024s 00:06:15.830 05:58:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.830 05:58:46 -- common/autotest_common.sh@10 -- # set +x 00:06:15.830 ************************************ 00:06:15.830 END TEST unittest_nvme_cuse 00:06:15.830 ************************************ 00:06:15.830 05:58:46 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:06:15.830 05:58:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:15.830 05:58:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.830 05:58:46 -- common/autotest_common.sh@10 -- # set +x 00:06:16.090 ************************************ 00:06:16.090 START TEST unittest_nvmf 00:06:16.090 ************************************ 00:06:16.090 05:58:46 -- common/autotest_common.sh@1104 -- # unittest_nvmf 00:06:16.090 05:58:46 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:06:16.090 00:06:16.090 00:06:16.090 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.090 http://cunit.sourceforge.net/ 00:06:16.090 00:06:16.090 00:06:16.090 Suite: nvmf 00:06:16.090 Test: test_get_log_page ...[2024-06-11 05:58:46.502448] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:06:16.090 passed 00:06:16.090 Test: test_process_fabrics_cmd ...passed 00:06:16.090 Test: test_connect ...[2024-06-11 05:58:46.504604] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:06:16.090 [2024-06-11 05:58:46.504978] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:06:16.091 [2024-06-11 05:58:46.505185] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:06:16.091 [2024-06-11 05:58:46.505372] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 
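The SQSIZE rejections a few lines below follow the NVMe zero-based convention: 0 is never legal, and a queue of depth N admits at most N - 1 (hence "min 1, max 31" for the depth-32 admin queue and "max 63" for depth-64 I/O queues). A tiny illustrative check, not SPDK's ctrlr.c:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* SQSIZE encodes queue depth minus one, so the legal range for a queue
 * of depth N is 1 .. N - 1. */
static bool
sqsize_valid(uint16_t sqsize, uint16_t queue_depth)
{
    return sqsize >= 1 && sqsize <= queue_depth - 1;
}

int
main(void)
{
    printf("%d\n", sqsize_valid(0, 32));   /* 0: Invalid SQSIZE = 0 */
    printf("%d\n", sqsize_valid(32, 32));  /* 0: exceeds max 31 */
    printf("%d\n", sqsize_valid(31, 32));  /* 1: accepted */
    return 0;
}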
00:06:16.091 [2024-06-11 05:58:46.505625] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:06:16.091 [2024-06-11 05:58:46.505778] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 786:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:06:16.091 [2024-06-11 05:58:46.506015] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 792:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:06:16.091 [2024-06-11 05:58:46.506174] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:06:16.091 [2024-06-11 05:58:46.506445] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:06:16.091 [2024-06-11 05:58:46.506661] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:06:16.091 [2024-06-11 05:58:46.507127] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:06:16.091 [2024-06-11 05:58:46.507381] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:06:16.091 [2024-06-11 05:58:46.507616] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 606:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:06:16.091 [2024-06-11 05:58:46.507821] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:06:16.091 [2024-06-11 05:58:46.508101] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:06:16.091 [2024-06-11 05:58:46.508399] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:06:16.091 passed 00:06:16.091 Test: test_get_ns_id_desc_list ...passed 00:06:16.091 Test: test_identify_ns ...[2024-06-11 05:58:46.509234] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:16.091 [2024-06-11 05:58:46.509527] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:06:16.091 [2024-06-11 05:58:46.509732] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:06:16.091 passed 00:06:16.091 Test: test_identify_ns_iocs_specific ...[2024-06-11 05:58:46.510110] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:16.091 [2024-06-11 05:58:46.510487] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:16.091 passed 00:06:16.091 Test: test_reservation_write_exclusive ...passed 00:06:16.091 Test: test_reservation_exclusive_access ...passed 00:06:16.091 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:06:16.091 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:06:16.091 Test: test_reservation_notification_log_page ...passed 00:06:16.091 Test: test_get_dif_ctx ...passed 00:06:16.091 Test: test_set_get_features ...[2024-06-11 05:58:46.512242] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:06:16.091 [2024-06-11 05:58:46.512441] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:06:16.091 [2024-06-11 05:58:46.512615] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:06:16.091 [2024-06-11 05:58:46.512811] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:06:16.091 passed 00:06:16.091 Test: test_identify_ctrlr ...passed 00:06:16.091 Test: test_identify_ctrlr_iocs_specific ...passed 00:06:16.091 Test: test_custom_admin_cmd ...passed 00:06:16.091 Test: test_fused_compare_and_write ...[2024-06-11 05:58:46.513983] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:06:16.091 [2024-06-11 05:58:46.514151] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:06:16.091 [2024-06-11 05:58:46.514305] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:06:16.091 passed 00:06:16.091 Test: test_multi_async_event_reqs ...passed 00:06:16.091 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:06:16.091 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:06:16.091 Test: test_multi_async_events ...passed 00:06:16.091 Test: test_rae ...passed 00:06:16.091 Test: test_nvmf_ctrlr_create_destruct ...passed 00:06:16.091 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:06:16.091 Test: test_spdk_nvmf_request_zcopy_start ...[2024-06-11 05:58:46.516043] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:06:16.091 passed 00:06:16.091 Test: test_zcopy_read ...passed 00:06:16.091 Test: test_zcopy_write ...passed 00:06:16.091 Test: test_nvmf_property_set ...passed 00:06:16.091 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-06-11 05:58:46.516934] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:06:16.091 [2024-06-11 05:58:46.517125] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:06:16.091 passed 00:06:16.091 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-06-11 05:58:46.517407] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:06:16.091 [2024-06-11 05:58:46.517576] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:06:16.091 [2024-06-11 05:58:46.517716] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:06:16.091 passed 00:06:16.091 00:06:16.091 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.091 suites 1 1 n/a 0 0 00:06:16.091 tests 30 30 30 0 0 00:06:16.091 asserts 885 885 885 0 n/a 00:06:16.091 00:06:16.091 Elapsed time = 0.009 seconds 00:06:16.091 05:58:46 -- unit/unittest.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:06:16.091 00:06:16.091 00:06:16.091 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.091 http://cunit.sourceforge.net/ 00:06:16.091 00:06:16.091 00:06:16.091 Suite: nvmf 00:06:16.091 Test: test_get_rw_params ...passed 00:06:16.091 Test: test_lba_in_range ...passed 00:06:16.091 Test: test_get_dif_ctx ...passed 00:06:16.091 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:06:16.091 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-06-11 05:58:46.554519] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:06:16.091 [2024-06-11 05:58:46.554861] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:06:16.091 [2024-06-11 05:58:46.554988] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:06:16.091 passed 00:06:16.091 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-06-11 05:58:46.555060] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:06:16.091 [2024-06-11 05:58:46.555181] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:06:16.091 passed 00:06:16.091 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-06-11 05:58:46.555324] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:06:16.091 passed 00:06:16.091 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...[2024-06-11 05:58:46.555377] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:06:16.091 [2024-06-11 05:58:46.555465] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:06:16.091 [2024-06-11 05:58:46.555502] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:06:16.091 passed 00:06:16.091 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:06:16.091 00:06:16.091 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.091 suites 1 1 n/a 0 0 00:06:16.092 tests 9 9 9 0 0 00:06:16.092 asserts 157 157 157 0 n/a 00:06:16.092 00:06:16.092 Elapsed time = 0.001 seconds 00:06:16.092 05:58:46 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:06:16.092 00:06:16.092 00:06:16.092 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.092 http://cunit.sourceforge.net/ 00:06:16.092 00:06:16.092 00:06:16.092 Suite: nvmf 00:06:16.092 Test: test_discovery_log ...passed 00:06:16.092 Test: test_discovery_log_with_filters ...passed 00:06:16.092 00:06:16.092 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.092 suites 1 1 n/a 0 0 00:06:16.092 tests 2 2 2 0 0 00:06:16.092 asserts 238 238 238 0 n/a 00:06:16.092 00:06:16.092 Elapsed time = 0.003 seconds 00:06:16.092 05:58:46 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:06:16.092 00:06:16.092 00:06:16.092 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.092 http://cunit.sourceforge.net/ 00:06:16.092 00:06:16.092 00:06:16.092 Suite: nvmf 
00:06:16.092 Test: nvmf_test_create_subsystem ...[2024-06-11 05:58:46.645999] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:06:16.092 [2024-06-11 05:58:46.646393] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:06:16.092 [2024-06-11 05:58:46.646514] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:06:16.092 [2024-06-11 05:58:46.646565] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:06:16.092 [2024-06-11 05:58:46.646600] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:06:16.092 [2024-06-11 05:58:46.646653] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:06:16.092 [2024-06-11 05:58:46.646783] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:06:16.092 [2024-06-11 05:58:46.646991] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
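The subsystem_ut output above spells out the NQN rules being enforced: total length capped at 223 characters, and every dot-separated domain label must start with a letter and end with an alphanumeric character. A reduced sketch of those checks (the length cap is applied to the whole argument here for brevity); this is an illustration, not SPDK's nvmf_nqn_is_valid():

#include <ctype.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define NQN_MAX_LEN 223

static bool
label_valid(const char *s, size_t len)
{
    /* must start with a letter and end with an alphanumeric character */
    return len > 0 && isalpha((unsigned char)s[0]) &&
           isalnum((unsigned char)s[len - 1]);
}

static bool
domain_valid(const char *domain)
{
    const char *start = domain;
    const char *p;

    if (strlen(domain) > NQN_MAX_LEN) {
        return false;  /* the "length 224 > max 223" case above */
    }
    for (p = domain; ; p++) {
        if (*p == '.' || *p == '\0') {
            if (!label_valid(start, (size_t)(p - start))) {
                return false;
            }
            if (*p == '\0') {
                return true;
            }
            start = p + 1;
        }
    }
}

int
main(void)
{
    printf("%d\n", domain_valid("io.spdk"));   /* 1 */
    printf("%d\n", domain_valid("io.3spdk"));  /* 0: label starts with a digit */
    printf("%d\n", domain_valid("io.spdk-"));  /* 0: label ends with '-' */
    printf("%d\n", domain_valid("io..spdk"));  /* 0: empty label */
    return 0;
}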
00:06:16.092 [2024-06-11 05:58:46.647123] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:06:16.092 passed 00:06:16.092 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-06-11 05:58:46.647192] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:06:16.092 [2024-06-11 05:58:46.647233] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:06:16.092 [2024-06-11 05:58:46.647474] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:06:16.092 passed 00:06:16.092 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:06:16.092 Test: test_reservation_register ...[2024-06-11 05:58:46.647604] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1774:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:06:16.092 [2024-06-11 05:58:46.647888] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:16.092 [2024-06-11 05:58:46.648035] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2881:nvmf_ns_reservation_register: *ERROR*: No registrant 00:06:16.092 passed 00:06:16.092 Test: test_reservation_register_with_ptpl ...passed 00:06:16.092 Test: test_reservation_acquire_preempt_1 ...[2024-06-11 05:58:46.649166] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:16.092 passed 00:06:16.092 Test: test_reservation_acquire_release_with_ptpl ...passed 00:06:16.092 Test: test_reservation_release ...[2024-06-11 05:58:46.651001] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:16.092 passed 00:06:16.092 Test: test_reservation_unregister_notification ...[2024-06-11 05:58:46.651304] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:16.092 passed 00:06:16.092 Test: test_reservation_release_notification ...[2024-06-11 05:58:46.651631] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:16.092 passed 00:06:16.092 Test: test_reservation_release_notification_write_exclusive ...[2024-06-11 05:58:46.651900] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:16.092 passed 00:06:16.092 Test: test_reservation_clear_notification ...[2024-06-11 05:58:46.652191] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:16.092 passed 00:06:16.092 Test: test_reservation_preempt_notification ...[2024-06-11 05:58:46.652462] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:16.092 passed 00:06:16.092 Test: test_spdk_nvmf_ns_event ...passed 00:06:16.092 Test: 
test_nvmf_ns_reservation_add_remove_registrant ...passed 00:06:16.092 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:06:16.092 Test: test_spdk_nvmf_subsystem_add_host ...[2024-06-11 05:58:46.653415] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:06:16.092 [2024-06-11 05:58:46.653541] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:06:16.092 passed 00:06:16.092 Test: test_nvmf_ns_reservation_report ...passed 00:06:16.092 Test: test_nvmf_nqn_is_valid ...[2024-06-11 05:58:46.653721] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3186:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:06:16.092 [2024-06-11 05:58:46.653815] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:06:16.092 [2024-06-11 05:58:46.653875] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:d6fced3b-46d8-4614-93f8-cfad83e2146": uuid is not the correct length 00:06:16.092 [2024-06-11 05:58:46.653929] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:06:16.092 passed 00:06:16.092 Test: test_nvmf_ns_reservation_restore ...[2024-06-11 05:58:46.654074] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2380:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:06:16.092 passed 00:06:16.092 Test: test_nvmf_subsystem_state_change ...passed 00:06:16.092 Test: test_nvmf_reservation_custom_ops ...passed 00:06:16.092 00:06:16.092 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.092 suites 1 1 n/a 0 0 00:06:16.092 tests 22 22 22 0 0 00:06:16.092 asserts 407 407 407 0 n/a 00:06:16.092 00:06:16.092 Elapsed time = 0.009 seconds 00:06:16.092 05:58:46 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:06:16.092 00:06:16.092 00:06:16.092 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.092 http://cunit.sourceforge.net/ 00:06:16.092 00:06:16.092 00:06:16.092 Suite: nvmf 00:06:16.092 Test: test_nvmf_tcp_create ...[2024-06-11 05:58:46.734334] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 730:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:06:16.092 passed 00:06:16.353 Test: test_nvmf_tcp_destroy ...passed 00:06:16.353 Test: test_nvmf_tcp_poll_group_create ...passed 00:06:16.353 Test: test_nvmf_tcp_send_c2h_data ...passed 00:06:16.353 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:06:16.353 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:06:16.353 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:06:16.353 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-06-11 05:58:46.875082] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.353 passed 00:06:16.353 Test: test_nvmf_tcp_send_capsule_resp_pdu ...[2024-06-11 05:58:46.875200] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc08459260 is same with the state(5) to be set 00:06:16.353 [2024-06-11 05:58:46.875329] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc08459260 is same with the state(5) to be set 00:06:16.353 [2024-06-11 05:58:46.875388] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.353 [2024-06-11 05:58:46.875433] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc08459260 is same with the state(5) to be set 00:06:16.353 passed 00:06:16.353 Test: test_nvmf_tcp_icreq_handle ...[2024-06-11 05:58:46.875564] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:06:16.353 [2024-06-11 05:58:46.875673] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.353 [2024-06-11 05:58:46.875761] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc08459260 is same with the state(5) to be set 00:06:16.353 passed 00:06:16.353 Test: test_nvmf_tcp_check_xfer_type ...passed 00:06:16.353 Test: test_nvmf_tcp_invalid_sgl ...[2024-06-11 05:58:46.875822] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:06:16.353 [2024-06-11 05:58:46.875876] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc08459260 is same with the state(5) to be set 00:06:16.353 [2024-06-11 05:58:46.875922] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.353 [2024-06-11 05:58:46.875973] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc08459260 is same with the state(5) to be set 00:06:16.353 [2024-06-11 05:58:46.876014] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:06:16.353 [2024-06-11 05:58:46.876088] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc08459260 is same with the state(5) to be set 00:06:16.353 [2024-06-11 05:58:46.876230] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2484:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:06:16.353 [2024-06-11 05:58:46.876295] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.353 passed 00:06:16.353 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-06-11 05:58:46.876334] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc08459260 is same with the state(5) to be set 00:06:16.353 [2024-06-11 05:58:46.876400] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2216:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffc08459fc0 00:06:16.353 [2024-06-11 05:58:46.876518] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.353 [2024-06-11 05:58:46.876598] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc08459720 is same with the state(5) to be set 00:06:16.353 [2024-06-11 05:58:46.876669] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2273:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffc08459720 00:06:16.353 [2024-06-11 05:58:46.876719] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.353 [2024-06-11 05:58:46.876771] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc08459720 is same with the state(5) to be set 00:06:16.353 [2024-06-11 05:58:46.876844] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2226:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:06:16.353 [2024-06-11 05:58:46.876904] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.353 [2024-06-11 05:58:46.876969] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc08459720 is same with the state(5) to be set 00:06:16.353 [2024-06-11 05:58:46.877032] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2265:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:06:16.353 [2024-06-11 05:58:46.877089] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.353 [2024-06-11 05:58:46.877150] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc08459720 is same with the state(5) to be set 00:06:16.353 [2024-06-11 05:58:46.877211] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.353 [2024-06-11 05:58:46.877264] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc08459720 is same with the state(5) to be set 00:06:16.353 [2024-06-11 05:58:46.877349] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.353 [2024-06-11 05:58:46.877399] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc08459720 is same with the state(5) to be set 00:06:16.353 [2024-06-11 05:58:46.877472] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.353 [2024-06-11 05:58:46.877516] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc08459720 is same with the state(5) to be set 00:06:16.353 [2024-06-11 05:58:46.877573] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.353 [2024-06-11 05:58:46.877618] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc08459720 is same with the state(5) to be set 00:06:16.353 passed 00:06:16.353 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-06-11 05:58:46.877694] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.353 [2024-06-11 05:58:46.877739] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc08459720 is same with the state(5) to be set 00:06:16.353 [2024-06-11 
05:58:46.877795] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:16.353 [2024-06-11 05:58:46.877849] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc08459720 is same with the state(5) to be set 00:06:16.353 passed 00:06:16.353 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-06-11 05:58:46.919756] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:06:16.353 passed 00:06:16.353 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-06-11 05:58:46.919892] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:06:16.353 [2024-06-11 05:58:46.920350] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:06:16.353 passed 00:06:16.353 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-06-11 05:58:46.920405] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:06:16.353 [2024-06-11 05:58:46.920631] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:06:16.353 passed 00:06:16.353 00:06:16.353 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.353 suites 1 1 n/a 0 0 00:06:16.353 tests 17 17 17 0 0 00:06:16.353 asserts 222 222 222 0 n/a 00:06:16.353 00:06:16.353 Elapsed time = 0.210 seconds[2024-06-11 05:58:46.920682] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 
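The long runs of "The recv state of tqpair=... is same with the state(5) to be set" in the tcp_ut output come from a setter that warns on no-op state transitions. A stand-in sketch; the enum layout below is assumed for illustration and is not SPDK's tcp.c:

#include <stdio.h>

enum recv_state {
    RECV_STATE_AWAIT_PDU_READY = 0,
    RECV_STATE_AWAIT_PDU_CH,
    RECV_STATE_AWAIT_PDU_PSH,
    RECV_STATE_AWAIT_PDU_PAYLOAD,
    RECV_STATE_QUIESCING,
    RECV_STATE_ERROR,          /* lands at 5 in this assumed layout */
};

struct tqpair {
    enum recv_state recv_state;
};

static void
set_recv_state(struct tqpair *tqpair, enum recv_state state)
{
    /* warn and bail out when the qpair is already in the requested state */
    if (tqpair->recv_state == state) {
        fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                (void *)tqpair, (int)state);
        return;
    }
    tqpair->recv_state = state;
}

int
main(void)
{
    struct tqpair q = { RECV_STATE_ERROR };

    set_recv_state(&q, RECV_STATE_ERROR);  /* warns: already in state 5 */
    return 0;
}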
00:06:16.353 00:06:16.613 05:58:47 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:06:16.613 00:06:16.613 00:06:16.613 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.613 http://cunit.sourceforge.net/ 00:06:16.613 00:06:16.613 00:06:16.613 Suite: nvmf 00:06:16.613 Test: test_nvmf_tgt_create_poll_group ...passed 00:06:16.613 00:06:16.613 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.613 suites 1 1 n/a 0 0 00:06:16.613 tests 1 1 1 0 0 00:06:16.613 asserts 17 17 17 0 n/a 00:06:16.613 00:06:16.613 Elapsed time = 0.032 seconds 00:06:16.613 00:06:16.613 real 0m0.649s 00:06:16.613 user 0m0.324s 00:06:16.613 sys 0m0.310s 00:06:16.613 05:58:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.613 ************************************ 00:06:16.613 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:06:16.613 END TEST unittest_nvmf 00:06:16.613 ************************************ 00:06:16.613 05:58:47 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:16.613 05:58:47 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:16.613 05:58:47 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:06:16.613 05:58:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:16.613 05:58:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:16.613 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:06:16.613 ************************************ 00:06:16.613 START TEST unittest_nvmf_rdma 00:06:16.613 ************************************ 00:06:16.613 05:58:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:06:16.613 00:06:16.613 00:06:16.613 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.613 http://cunit.sourceforge.net/ 00:06:16.613 00:06:16.613 00:06:16.613 Suite: nvmf 00:06:16.613 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-06-11 05:58:47.218794] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1916:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:06:16.613 [2024-06-11 05:58:47.219920] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:06:16.613 [2024-06-11 05:58:47.220056] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:06:16.613 passed 00:06:16.613 Test: test_spdk_nvmf_rdma_request_process ...passed 00:06:16.613 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:06:16.613 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:06:16.613 Test: test_nvmf_rdma_opts_init ...passed 00:06:16.613 Test: test_nvmf_rdma_request_free_data ...passed 00:06:16.613 Test: test_nvmf_rdma_update_ibv_state ...[2024-06-11 05:58:47.222594] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 614:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 
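The nvmf_rdma_resize_cq failures reported just below reduce to a capacity check: required CQ entries must fit the device's max_cqe, and transports without CQ resize (iWARP) can only warn about possible overrun. Names here are illustrative, not SPDK's rdma.c:

#include <stdbool.h>
#include <stdio.h>

static bool
cq_has_room(int required_cqe, int current_cqe, int max_cqe, bool can_resize)
{
    if (required_cqe <= current_cqe) {
        return true;                      /* fits the CQ as-is */
    }
    if (!can_resize) {
        fprintf(stderr, "CQ resize unsupported; current capacity %d, required %d\n",
                current_cqe, required_cqe);
        return false;                     /* risk of CQ overrun */
    }
    if (required_cqe > max_cqe) {
        fprintf(stderr, "CQE requirement (%d) exceeds device max_cqe limitation (%d)\n",
                required_cqe, max_cqe);
        return false;
    }
    return true;                          /* ibv_resize_cq() would run here */
}

int
main(void)
{
    return cq_has_room(26, 20, 3, true) ? 1 : 0;  /* 26 > max_cqe 3, as in the log */
}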
00:06:16.613 passed 00:06:16.614 Test: test_nvmf_rdma_resources_create ...[2024-06-11 05:58:47.222692] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 625:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:06:16.614 passed 00:06:16.614 Test: test_nvmf_rdma_qpair_compare ...passed 00:06:16.614 Test: test_nvmf_rdma_resize_cq ...[2024-06-11 05:58:47.225664] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1006:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:06:16.614 Using CQ of insufficient size may lead to CQ overrun 00:06:16.614 [2024-06-11 05:58:47.225992] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1011:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:06:16.614 [2024-06-11 05:58:47.226139] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1019:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:06:16.614 passed 00:06:16.614 00:06:16.614 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.614 suites 1 1 n/a 0 0 00:06:16.614 tests 10 10 10 0 0 00:06:16.614 asserts 584 584 584 0 n/a 00:06:16.614 00:06:16.614 Elapsed time = 0.007 seconds 00:06:16.614 00:06:16.614 real 0m0.054s 00:06:16.614 user 0m0.024s 00:06:16.614 sys 0m0.028s 00:06:16.614 05:58:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.614 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:06:16.614 ************************************ 00:06:16.614 END TEST unittest_nvmf_rdma 00:06:16.614 ************************************ 00:06:16.875 05:58:47 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:16.875 05:58:47 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:06:16.875 05:58:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:16.875 05:58:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:16.875 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:06:16.876 ************************************ 00:06:16.876 START TEST unittest_scsi 00:06:16.876 ************************************ 00:06:16.876 05:58:47 -- common/autotest_common.sh@1104 -- # unittest_scsi 00:06:16.876 05:58:47 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:06:16.876 00:06:16.876 00:06:16.876 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.876 http://cunit.sourceforge.net/ 00:06:16.876 00:06:16.876 00:06:16.876 Suite: dev_suite 00:06:16.876 Test: dev_destruct_null_dev ...passed 00:06:16.876 Test: dev_destruct_zero_luns ...passed 00:06:16.876 Test: dev_destruct_null_lun ...passed 00:06:16.876 Test: dev_destruct_success ...passed 00:06:16.876 Test: dev_construct_num_luns_zero ...[2024-06-11 05:58:47.329301] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:06:16.876 passed 00:06:16.876 Test: dev_construct_no_lun_zero ...[2024-06-11 05:58:47.330284] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:06:16.876 passed 00:06:16.876 Test: dev_construct_null_lun ...[2024-06-11 05:58:47.330713] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:06:16.876 passed 00:06:16.876 Test: dev_construct_name_too_long ...[2024-06-11 05:58:47.331152] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 
222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:06:16.876 passed 00:06:16.876 Test: dev_construct_success ...passed 00:06:16.876 Test: dev_construct_success_lun_zero_not_first ...passed 00:06:16.876 Test: dev_queue_mgmt_task_success ...passed 00:06:16.876 Test: dev_queue_task_success ...passed 00:06:16.876 Test: dev_stop_success ...passed 00:06:16.876 Test: dev_add_port_max_ports ...[2024-06-11 05:58:47.333261] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:06:16.876 passed 00:06:16.876 Test: dev_add_port_construct_failure1 ...[2024-06-11 05:58:47.333869] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:06:16.876 passed 00:06:16.876 Test: dev_add_port_construct_failure2 ...[2024-06-11 05:58:47.334463] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:06:16.876 passed 00:06:16.876 Test: dev_add_port_success1 ...passed 00:06:16.876 Test: dev_add_port_success2 ...passed 00:06:16.876 Test: dev_add_port_success3 ...passed 00:06:16.876 Test: dev_find_port_by_id_num_ports_zero ...passed 00:06:16.876 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:06:16.876 Test: dev_find_port_by_id_success ...passed 00:06:16.876 Test: dev_add_lun_bdev_not_found ...passed 00:06:16.876 Test: dev_add_lun_no_free_lun_id ...[2024-06-11 05:58:47.336601] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:06:16.876 passed 00:06:16.876 Test: dev_add_lun_success1 ...passed 00:06:16.876 Test: dev_add_lun_success2 ...passed 00:06:16.876 Test: dev_check_pending_tasks ...passed 00:06:16.876 Test: dev_iterate_luns ...passed 00:06:16.876 Test: dev_find_free_lun ...passed 00:06:16.876 00:06:16.876 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.876 suites 1 1 n/a 0 0 00:06:16.876 tests 29 29 29 0 0 00:06:16.876 asserts 97 97 97 0 n/a 00:06:16.876 00:06:16.876 Elapsed time = 0.005 seconds 00:06:16.876 05:58:47 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:06:16.876 00:06:16.876 00:06:16.876 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.876 http://cunit.sourceforge.net/ 00:06:16.876 00:06:16.876 00:06:16.876 Suite: lun_suite 00:06:16.876 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-06-11 05:58:47.384047] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:06:16.876 passed 00:06:16.876 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-06-11 05:58:47.384588] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:06:16.876 passed 00:06:16.876 Test: lun_task_mgmt_execute_lun_reset ...passed 00:06:16.876 Test: lun_task_mgmt_execute_target_reset ...passed 00:06:16.876 Test: lun_task_mgmt_execute_invalid_case ...passed 00:06:16.876 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:06:16.876 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...[2024-06-11 05:58:47.384857] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 
169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:06:16.876 passed 00:06:16.876 Test: lun_append_task_null_lun_not_supported ...passed 00:06:16.876 Test: lun_execute_scsi_task_pending ...passed 00:06:16.876 Test: lun_execute_scsi_task_complete ...passed 00:06:16.876 Test: lun_execute_scsi_task_resize ...passed 00:06:16.876 Test: lun_destruct_success ...passed 00:06:16.876 Test: lun_construct_null_ctx ...[2024-06-11 05:58:47.385160] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:06:16.876 passed 00:06:16.876 Test: lun_construct_success ...passed 00:06:16.876 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:06:16.876 Test: lun_reset_task_suspend_scsi_task ...passed 00:06:16.876 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:06:16.876 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:06:16.876 00:06:16.876 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.876 suites 1 1 n/a 0 0 00:06:16.876 tests 18 18 18 0 0 00:06:16.876 asserts 153 153 153 0 n/a 00:06:16.876 00:06:16.876 Elapsed time = 0.002 seconds 00:06:16.876 05:58:47 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:06:16.876 00:06:16.876 00:06:16.876 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.876 http://cunit.sourceforge.net/ 00:06:16.876 00:06:16.876 00:06:16.876 Suite: scsi_suite 00:06:16.876 Test: scsi_init ...passed 00:06:16.876 00:06:16.876 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.876 suites 1 1 n/a 0 0 00:06:16.876 tests 1 1 1 0 0 00:06:16.876 asserts 1 1 1 0 n/a 00:06:16.876 00:06:16.876 Elapsed time = 0.000 seconds 00:06:16.876 05:58:47 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:06:16.876 00:06:16.876 00:06:16.876 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.876 http://cunit.sourceforge.net/ 00:06:16.876 00:06:16.876 00:06:16.876 Suite: translation_suite 00:06:16.876 Test: mode_select_6_test ...passed 00:06:16.876 Test: mode_select_6_test2 ...passed 00:06:16.876 Test: mode_sense_6_test ...passed 00:06:16.876 Test: mode_sense_10_test ...passed 00:06:16.876 Test: inquiry_evpd_test ...passed 00:06:16.876 Test: inquiry_standard_test ...passed 00:06:16.876 Test: inquiry_overflow_test ...passed 00:06:16.876 Test: task_complete_test ...passed 00:06:16.876 Test: lba_range_test ...passed 00:06:16.876 Test: xfer_len_test ...[2024-06-11 05:58:47.466183] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:06:16.876 passed 00:06:16.876 Test: xfer_test ...passed 00:06:16.876 Test: scsi_name_padding_test ...passed 00:06:16.876 Test: get_dif_ctx_test ...passed 00:06:16.876 Test: unmap_split_test ...passed 00:06:16.876 00:06:16.876 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.876 suites 1 1 n/a 0 0 00:06:16.876 tests 14 14 14 0 0 00:06:16.876 asserts 1200 1200 1200 0 n/a 00:06:16.876 00:06:16.876 Elapsed time = 0.006 seconds 00:06:16.876 05:58:47 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:06:16.876 00:06:16.876 00:06:16.876 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.876 http://cunit.sourceforge.net/ 00:06:16.876 00:06:16.876 00:06:16.876 Suite: reservation_suite 00:06:16.876 Test: test_reservation_register ...[2024-06-11 05:58:47.502126] 
/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:16.876 passed 00:06:16.876 Test: test_reservation_reserve ...[2024-06-11 05:58:47.502522] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:16.876 passed 00:06:16.876 Test: test_reservation_preempt_non_all_regs ...[2024-06-11 05:58:47.502614] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:06:16.876 [2024-06-11 05:58:47.502740] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:06:16.876 [2024-06-11 05:58:47.502819] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:16.876 [2024-06-11 05:58:47.502913] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:06:16.876 passed 00:06:16.876 Test: test_reservation_preempt_all_regs ...[2024-06-11 05:58:47.503079] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:16.876 passed 00:06:16.876 Test: test_reservation_cmds_conflict ...[2024-06-11 05:58:47.503249] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:16.876 [2024-06-11 05:58:47.503335] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:06:16.876 [2024-06-11 05:58:47.503391] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:06:16.876 [2024-06-11 05:58:47.503437] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:06:16.876 [2024-06-11 05:58:47.503499] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:06:16.877 [2024-06-11 05:58:47.503542] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:06:16.877 passed 00:06:16.877 Test: test_scsi2_reserve_release ...passed 00:06:16.877 Test: test_pr_with_scsi2_reserve_release ...passed 00:06:16.877 00:06:16.877 [2024-06-11 05:58:47.503664] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:16.877 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.877 suites 1 1 n/a 0 0 00:06:16.877 tests 7 7 7 0 0 00:06:16.877 asserts 257 257 257 0 n/a 00:06:16.877 00:06:16.877 Elapsed time = 0.002 seconds 00:06:17.135 00:06:17.135 real 0m0.218s 00:06:17.135 user 0m0.095s 00:06:17.135 sys 0m0.118s 00:06:17.135 05:58:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.135 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:06:17.135 ************************************ 00:06:17.135 END TEST unittest_scsi 00:06:17.135 ************************************ 00:06:17.135 05:58:47 -- unit/unittest.sh@276 -- # uname -s 00:06:17.135 05:58:47 -- unit/unittest.sh@276 -- # '[' Linux = Linux ']' 00:06:17.135 05:58:47 -- 
unit/unittest.sh@277 -- # run_test unittest_sock unittest_sock 00:06:17.135 05:58:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:17.135 05:58:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.135 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:06:17.135 ************************************ 00:06:17.135 START TEST unittest_sock 00:06:17.135 ************************************ 00:06:17.135 05:58:47 -- common/autotest_common.sh@1104 -- # unittest_sock 00:06:17.135 05:58:47 -- unit/unittest.sh@123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:06:17.135 00:06:17.135 00:06:17.135 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.135 http://cunit.sourceforge.net/ 00:06:17.135 00:06:17.135 00:06:17.135 Suite: sock 00:06:17.135 Test: posix_sock ...passed 00:06:17.135 Test: ut_sock ...passed 00:06:17.135 Test: posix_sock_group ...passed 00:06:17.135 Test: ut_sock_group ...passed 00:06:17.135 Test: posix_sock_group_fairness ...passed 00:06:17.135 Test: _posix_sock_close ...passed 00:06:17.135 Test: sock_get_default_opts ...passed 00:06:17.135 Test: ut_sock_impl_get_set_opts ...passed 00:06:17.135 Test: posix_sock_impl_get_set_opts ...passed 00:06:17.135 Test: ut_sock_map ...passed 00:06:17.135 Test: override_impl_opts ...passed 00:06:17.135 Test: ut_sock_group_get_ctx ...passed 00:06:17.135 00:06:17.135 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.135 suites 1 1 n/a 0 0 00:06:17.135 tests 12 12 12 0 0 00:06:17.135 asserts 349 349 349 0 n/a 00:06:17.135 00:06:17.135 Elapsed time = 0.008 seconds 00:06:17.135 05:58:47 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:06:17.135 00:06:17.135 00:06:17.135 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.135 http://cunit.sourceforge.net/ 00:06:17.135 00:06:17.135 00:06:17.135 Suite: posix 00:06:17.135 Test: flush ...passed 00:06:17.135 00:06:17.135 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.135 suites 1 1 n/a 0 0 00:06:17.135 tests 1 1 1 0 0 00:06:17.135 asserts 28 28 28 0 n/a 00:06:17.135 00:06:17.135 Elapsed time = 0.000 seconds 00:06:17.135 05:58:47 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:17.135 00:06:17.135 real 0m0.106s 00:06:17.135 user 0m0.040s 00:06:17.135 sys 0m0.044s 00:06:17.135 05:58:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.135 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:06:17.135 ************************************ 00:06:17.135 END TEST unittest_sock 00:06:17.135 ************************************ 00:06:17.135 05:58:47 -- unit/unittest.sh@279 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:06:17.135 05:58:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:17.136 05:58:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.136 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:06:17.136 ************************************ 00:06:17.136 START TEST unittest_thread 00:06:17.136 ************************************ 00:06:17.136 05:58:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:06:17.136 00:06:17.136 00:06:17.136 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.136 http://cunit.sourceforge.net/ 00:06:17.136 00:06:17.136 00:06:17.136 Suite: io_channel 00:06:17.395 Test: thread_alloc ...passed 
00:06:17.395 Test: thread_send_msg ...passed 00:06:17.395 Test: thread_poller ...passed 00:06:17.395 Test: poller_pause ...passed 00:06:17.395 Test: thread_for_each ...passed 00:06:17.395 Test: for_each_channel_remove ...passed 00:06:17.395 Test: for_each_channel_unreg ...[2024-06-11 05:58:47.799406] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x7ffd44733510 already registered (old:0x613000000200 new:0x6130000003c0) 00:06:17.395 passed 00:06:17.395 Test: thread_name ...passed 00:06:17.395 Test: channel ...[2024-06-11 05:58:47.806853] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x55e93d5530e0 00:06:17.395 passed 00:06:17.395 Test: channel_destroy_races ...passed 00:06:17.395 Test: thread_exit_test ...[2024-06-11 05:58:47.818514] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 629:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:06:17.395 passed 00:06:17.395 Test: thread_update_stats_test ...passed 00:06:17.395 Test: nested_channel ...passed 00:06:17.395 Test: device_unregister_and_thread_exit_race ...passed 00:06:17.395 Test: cache_closest_timed_poller ...passed 00:06:17.395 Test: multi_timed_pollers_have_same_expiration ...passed 00:06:17.395 Test: io_device_lookup ...passed 00:06:17.395 Test: spdk_spin ...[2024-06-11 05:58:47.834952] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:06:17.395 [2024-06-11 05:58:47.835150] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffd44733500 00:06:17.395 [2024-06-11 05:58:47.835389] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:06:17.395 [2024-06-11 05:58:47.837329] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:17.395 [2024-06-11 05:58:47.837533] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffd44733500 00:06:17.395 [2024-06-11 05:58:47.837674] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:06:17.395 [2024-06-11 05:58:47.837839] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffd44733500 00:06:17.395 [2024-06-11 05:58:47.837987] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:06:17.395 [2024-06-11 05:58:47.838138] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffd44733500 00:06:17.395 [2024-06-11 05:58:47.838261] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:06:17.395 [2024-06-11 05:58:47.838434] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffd44733500 00:06:17.395 passed 00:06:17.395 Test: for_each_channel_and_thread_exit_race ...passed 00:06:17.395 Test: for_each_thread_and_thread_exit_race ...passed 00:06:17.395 00:06:17.395 Run Summary: Type 
Total Ran Passed Failed Inactive 00:06:17.395 suites 1 1 n/a 0 0 00:06:17.395 tests 20 20 20 0 0 00:06:17.395 asserts 409 409 409 0 n/a 00:06:17.395 00:06:17.395 Elapsed time = 0.057 seconds 00:06:17.395 00:06:17.395 real 0m0.117s 00:06:17.395 user 0m0.060s 00:06:17.395 sys 0m0.043s 00:06:17.395 05:58:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.395 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:06:17.395 ************************************ 00:06:17.395 END TEST unittest_thread 00:06:17.395 ************************************ 00:06:17.395 05:58:47 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:06:17.395 05:58:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:17.395 05:58:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.395 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:06:17.395 ************************************ 00:06:17.395 START TEST unittest_iobuf 00:06:17.395 ************************************ 00:06:17.395 05:58:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:06:17.395 00:06:17.395 00:06:17.395 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.395 http://cunit.sourceforge.net/ 00:06:17.395 00:06:17.395 00:06:17.395 Suite: io_channel 00:06:17.395 Test: iobuf ...passed 00:06:17.395 Test: iobuf_cache ...[2024-06-11 05:58:47.952588] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:06:17.395 [2024-06-11 05:58:47.953042] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:06:17.395 [2024-06-11 05:58:47.953270] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:06:17.395 [2024-06-11 05:58:47.953366] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:06:17.395 [2024-06-11 05:58:47.953491] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:06:17.395 [2024-06-11 05:58:47.953595] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
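The iobuf_cache *ERROR* lines above are likewise expected: the test deliberately asks each channel for a bigger per-channel cache than global pools of only 4 buffers can back, to exercise the failure path in spdk_iobuf_channel_init(). In a real application the fix the message points at is to raise the pool sizes before iobuf initialization — roughly as below, assuming the spdk_iobuf_set_opts() API from recent SPDK (the field names come straight from the error text; the counts are placeholders to be sized with scripts/calc-iobuf.py, as the log itself suggests):

#include "spdk/thread.h"

static int
configure_iobuf_pools(void)
{
	struct spdk_iobuf_opts opts = {};

	/* The global pools must cover the sum of every channel's cache,
	 * otherwise spdk_iobuf_channel_init() fails as shown above. */
	opts.small_pool_count = 8192;	/* placeholder; size via scripts/calc-iobuf.py */
	opts.large_pool_count = 1024;	/* placeholder */
	opts.small_bufsize = 8192;	/* believed SPDK defaults, kept explicit here */
	opts.large_bufsize = 135168;

	return spdk_iobuf_set_opts(&opts);
}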
00:06:17.395 passed 00:06:17.395 00:06:17.395 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.395 suites 1 1 n/a 0 0 00:06:17.395 tests 2 2 2 0 0 00:06:17.395 asserts 107 107 107 0 n/a 00:06:17.395 00:06:17.395 Elapsed time = 0.008 seconds 00:06:17.395 00:06:17.395 real 0m0.053s 00:06:17.395 user 0m0.023s 00:06:17.395 sys 0m0.030s 00:06:17.395 05:58:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.395 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:06:17.395 ************************************ 00:06:17.395 END TEST unittest_iobuf 00:06:17.395 ************************************ 00:06:17.395 05:58:48 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:06:17.395 05:58:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:17.395 05:58:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.395 05:58:48 -- common/autotest_common.sh@10 -- # set +x 00:06:17.395 ************************************ 00:06:17.395 START TEST unittest_util 00:06:17.395 ************************************ 00:06:17.395 05:58:48 -- common/autotest_common.sh@1104 -- # unittest_util 00:06:17.395 05:58:48 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:06:17.655 00:06:17.655 00:06:17.655 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.655 http://cunit.sourceforge.net/ 00:06:17.655 00:06:17.655 00:06:17.655 Suite: base64 00:06:17.655 Test: test_base64_get_encoded_strlen ...passed 00:06:17.655 Test: test_base64_get_decoded_len ...passed 00:06:17.655 Test: test_base64_encode ...passed 00:06:17.655 Test: test_base64_decode ...passed 00:06:17.655 Test: test_base64_urlsafe_encode ...passed 00:06:17.655 Test: test_base64_urlsafe_decode ...passed 00:06:17.655 00:06:17.655 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.655 suites 1 1 n/a 0 0 00:06:17.655 tests 6 6 6 0 0 00:06:17.655 asserts 112 112 112 0 n/a 00:06:17.655 00:06:17.655 Elapsed time = 0.000 seconds 00:06:17.655 05:58:48 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:06:17.655 00:06:17.655 00:06:17.655 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.655 http://cunit.sourceforge.net/ 00:06:17.655 00:06:17.655 00:06:17.655 Suite: bit_array 00:06:17.655 Test: test_1bit ...passed 00:06:17.655 Test: test_64bit ...passed 00:06:17.655 Test: test_find ...passed 00:06:17.655 Test: test_resize ...passed 00:06:17.655 Test: test_errors ...passed 00:06:17.655 Test: test_count ...passed 00:06:17.655 Test: test_mask_store_load ...passed 00:06:17.655 Test: test_mask_clear ...passed 00:06:17.655 00:06:17.655 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.655 suites 1 1 n/a 0 0 00:06:17.655 tests 8 8 8 0 0 00:06:17.655 asserts 5075 5075 5075 0 n/a 00:06:17.655 00:06:17.655 Elapsed time = 0.002 seconds 00:06:17.655 05:58:48 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:06:17.655 00:06:17.655 00:06:17.655 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.655 http://cunit.sourceforge.net/ 00:06:17.655 00:06:17.655 00:06:17.655 Suite: cpuset 00:06:17.655 Test: test_cpuset ...passed 00:06:17.655 Test: test_cpuset_parse ...[2024-06-11 05:58:48.129002] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:06:17.655 [2024-06-11 05:58:48.129569] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list 
'[]' failed on character ']' 00:06:17.655 [2024-06-11 05:58:48.129753] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:06:17.655 [2024-06-11 05:58:48.129920] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:06:17.655 [2024-06-11 05:58:48.130000] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:06:17.655 [2024-06-11 05:58:48.130080] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:06:17.655 [2024-06-11 05:58:48.130149] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:06:17.655 [2024-06-11 05:58:48.130244] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:06:17.655 passed 00:06:17.655 Test: test_cpuset_fmt ...passed 00:06:17.655 00:06:17.655 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.655 suites 1 1 n/a 0 0 00:06:17.655 tests 3 3 3 0 0 00:06:17.655 asserts 65 65 65 0 n/a 00:06:17.655 00:06:17.655 Elapsed time = 0.003 seconds 00:06:17.655 05:58:48 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:06:17.655 00:06:17.655 00:06:17.655 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.655 http://cunit.sourceforge.net/ 00:06:17.655 00:06:17.655 00:06:17.655 Suite: crc16 00:06:17.655 Test: test_crc16_t10dif ...passed 00:06:17.655 Test: test_crc16_t10dif_seed ...passed 00:06:17.655 Test: test_crc16_t10dif_copy ...passed 00:06:17.655 00:06:17.655 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.655 suites 1 1 n/a 0 0 00:06:17.655 tests 3 3 3 0 0 00:06:17.655 asserts 5 5 5 0 n/a 00:06:17.655 00:06:17.655 Elapsed time = 0.000 seconds 00:06:17.655 05:58:48 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:06:17.655 00:06:17.655 00:06:17.655 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.655 http://cunit.sourceforge.net/ 00:06:17.655 00:06:17.655 00:06:17.655 Suite: crc32_ieee 00:06:17.655 Test: test_crc32_ieee ...passed 00:06:17.655 00:06:17.655 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.655 suites 1 1 n/a 0 0 00:06:17.655 tests 1 1 1 0 0 00:06:17.655 asserts 1 1 1 0 n/a 00:06:17.655 00:06:17.655 Elapsed time = 0.000 seconds 00:06:17.655 05:58:48 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:06:17.655 00:06:17.655 00:06:17.655 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.655 http://cunit.sourceforge.net/ 00:06:17.655 00:06:17.655 00:06:17.655 Suite: crc32c 00:06:17.655 Test: test_crc32c ...passed 00:06:17.655 Test: test_crc32c_nvme ...passed 00:06:17.655 00:06:17.655 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.655 suites 1 1 n/a 0 0 00:06:17.655 tests 2 2 2 0 0 00:06:17.655 asserts 16 16 16 0 n/a 00:06:17.655 00:06:17.655 Elapsed time = 0.000 seconds 00:06:17.655 05:58:48 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:06:17.655 00:06:17.655 00:06:17.655 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.655 http://cunit.sourceforge.net/ 00:06:17.655 00:06:17.655 00:06:17.655 Suite: crc64 00:06:17.655 Test: test_crc64_nvme 
...passed 00:06:17.655 00:06:17.655 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.655 suites 1 1 n/a 0 0 00:06:17.655 tests 1 1 1 0 0 00:06:17.655 asserts 4 4 4 0 n/a 00:06:17.655 00:06:17.655 Elapsed time = 0.000 seconds 00:06:17.917 05:58:48 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:06:17.917 00:06:17.917 00:06:17.917 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.917 http://cunit.sourceforge.net/ 00:06:17.917 00:06:17.917 00:06:17.917 Suite: string 00:06:17.917 Test: test_parse_ip_addr ...passed 00:06:17.917 Test: test_str_chomp ...passed 00:06:17.917 Test: test_parse_capacity ...passed 00:06:17.917 Test: test_sprintf_append_realloc ...passed 00:06:17.917 Test: test_strtol ...passed 00:06:17.917 Test: test_strtoll ...passed 00:06:17.917 Test: test_strarray ...passed 00:06:17.917 Test: test_strcpy_replace ...passed 00:06:17.917 00:06:17.917 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.917 suites 1 1 n/a 0 0 00:06:17.917 tests 8 8 8 0 0 00:06:17.917 asserts 161 161 161 0 n/a 00:06:17.917 00:06:17.917 Elapsed time = 0.001 seconds 00:06:17.917 05:58:48 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:06:17.917 00:06:17.917 00:06:17.917 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.917 http://cunit.sourceforge.net/ 00:06:17.917 00:06:17.917 00:06:17.917 Suite: dif 00:06:17.917 Test: dif_generate_and_verify_test ...[2024-06-11 05:58:48.375926] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:06:17.917 [2024-06-11 05:58:48.376517] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:06:17.917 [2024-06-11 05:58:48.376837] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:06:17.917 [2024-06-11 05:58:48.377140] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:06:17.917 [2024-06-11 05:58:48.377437] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:06:17.917 [2024-06-11 05:58:48.377746] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:06:17.917 passed 00:06:17.917 Test: dif_disable_check_test ...[2024-06-11 05:58:48.378791] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:06:17.917 [2024-06-11 05:58:48.379195] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:06:17.917 [2024-06-11 05:58:48.379494] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:06:17.917 passed 00:06:17.917 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-06-11 05:58:48.380614] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:06:17.917 [2024-06-11 05:58:48.381000] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:06:17.917 [2024-06-11 
05:58:48.381380] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:06:17.917 [2024-06-11 05:58:48.381803] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:06:17.917 [2024-06-11 05:58:48.382194] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:17.917 [2024-06-11 05:58:48.382579] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:17.917 [2024-06-11 05:58:48.382964] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:17.917 [2024-06-11 05:58:48.383327] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:17.917 [2024-06-11 05:58:48.383691] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:06:17.917 [2024-06-11 05:58:48.384086] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:06:17.917 [2024-06-11 05:58:48.384473] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:06:17.917 passed 00:06:17.917 Test: dif_apptag_mask_test ...[2024-06-11 05:58:48.384823] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:06:17.917 [2024-06-11 05:58:48.385141] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:06:17.917 passed 00:06:17.917 Test: dif_sec_512_md_0_error_test ...[2024-06-11 05:58:48.385361] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:06:17.917 passed 00:06:17.917 Test: dif_sec_4096_md_0_error_test ...[2024-06-11 05:58:48.385431] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:06:17.917 [2024-06-11 05:58:48.385479] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
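Nearly every remaining line of the dif suite is a deliberately provoked "Failed to compare Guard / App Tag / Ref Tag" message: dif_ut corrupts protection information on purpose and checks that verification blames the right field. For orientation, this is the 8-byte T10 DIF tuple those three comparisons refer to, carried per block in the tests' 512+8 and 4096+128 formats — plain illustrative C, not SPDK's internal definition:

#include <stdint.h>

/* T10 protection information tuple, stored big-endian in the
 * last 8 bytes of each block's metadata. */
struct t10_dif_tuple {
	uint16_t guard;    /* CRC16 of the block's data -> "Failed to compare Guard" */
	uint16_t app_tag;  /* application-defined       -> "Failed to compare App Tag" */
	uint32_t ref_tag;  /* usually seeded from the LBA -> "Failed to compare Ref Tag" */
};

/* The dif_sec_512_md_0_error_test just above (and the 4096/4100 md
 * cases around it) check the preconditions spdk_dif_ctx_init() enforces
 * before any of this: the metadata area must be at least 8 bytes, and
 * the block size must be valid. */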
00:06:17.917 passed 00:06:17.917 Test: dif_sec_4100_md_128_error_test ...passed 00:06:17.917 Test: dif_guard_seed_test ...[2024-06-11 05:58:48.385556] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:06:17.917 [2024-06-11 05:58:48.385607] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:06:17.917 passed 00:06:17.917 Test: dif_guard_value_test ...passed 00:06:17.917 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:06:17.917 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:06:17.917 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:06:17.917 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:17.917 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:17.917 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:06:17.917 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:06:17.917 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:06:17.917 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:06:17.917 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:06:17.917 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:06:17.917 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:06:17.917 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:06:17.917 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:06:17.917 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:06:17.917 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:06:17.917 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:17.917 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:17.917 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-06-11 05:58:48.430994] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f54c, Actual=fd4c 00:06:17.917 [2024-06-11 05:58:48.433622] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f621, Actual=fe21 00:06:17.917 [2024-06-11 05:58:48.436224] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=888 00:06:17.917 [2024-06-11 05:58:48.438835] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=888 00:06:17.917 [2024-06-11 05:58:48.441476] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=800005d 00:06:17.917 [2024-06-11 05:58:48.444068] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=800005d 00:06:17.917 [2024-06-11 05:58:48.446682] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=5c85 00:06:17.917 [2024-06-11 05:58:48.448545] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fe21, Actual=9c65 00:06:17.917 [2024-06-11 05:58:48.450417] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=12b753ed, Actual=1ab753ed 00:06:17.917 [2024-06-11 05:58:48.453030] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=30574660, Actual=38574660 00:06:17.917 [2024-06-11 05:58:48.455662] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=888 00:06:17.917 [2024-06-11 05:58:48.458256] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=888 00:06:17.917 [2024-06-11 05:58:48.460872] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=800005d 00:06:17.917 [2024-06-11 05:58:48.463527] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=800005d 00:06:17.917 [2024-06-11 05:58:48.466154] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ed, Actual=7ca60735 00:06:17.917 [2024-06-11 05:58:48.468029] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=38574660, Actual=d3676390 00:06:17.917 [2024-06-11 05:58:48.469928] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a77286cc20d3, Actual=a576a7728ecc20d3 00:06:17.917 [2024-06-11 05:58:48.472553] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=88010a2d4037a266, Actual=88010a2d4837a266 00:06:17.917 [2024-06-11 05:58:48.475171] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=888 00:06:17.917 [2024-06-11 05:58:48.477790] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=888 00:06:17.917 [2024-06-11 05:58:48.480390] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=8000000005d 00:06:17.917 [2024-06-11 05:58:48.483000] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=8000000005d 00:06:17.917 [2024-06-11 05:58:48.485637] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=78f43ef77707775d 00:06:17.917 [2024-06-11 05:58:48.487501] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=88010a2d4837a266, Actual=800b6b673440097b 00:06:17.917 passed 00:06:17.918 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-06-11 05:58:48.488492] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f54c, Actual=fd4c 00:06:17.918 [2024-06-11 05:58:48.488816] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f621, Actual=fe21 00:06:17.918 [2024-06-11 05:58:48.489117] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.918 [2024-06-11 05:58:48.489433] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: 
Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.918 [2024-06-11 05:58:48.489775] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:06:17.918 [2024-06-11 05:58:48.490084] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:06:17.918 [2024-06-11 05:58:48.490387] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=5c85 00:06:17.918 [2024-06-11 05:58:48.490568] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=9c65 00:06:17.918 [2024-06-11 05:58:48.490756] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=12b753ed, Actual=1ab753ed 00:06:17.918 [2024-06-11 05:58:48.491063] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=30574660, Actual=38574660 00:06:17.918 [2024-06-11 05:58:48.491407] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.918 [2024-06-11 05:58:48.491710] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.918 [2024-06-11 05:58:48.492023] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:06:17.918 [2024-06-11 05:58:48.492326] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:06:17.918 [2024-06-11 05:58:48.492627] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=7ca60735 00:06:17.918 [2024-06-11 05:58:48.492804] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=d3676390 00:06:17.918 [2024-06-11 05:58:48.493007] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a77286cc20d3, Actual=a576a7728ecc20d3 00:06:17.918 [2024-06-11 05:58:48.493315] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4037a266, Actual=88010a2d4837a266 00:06:17.918 [2024-06-11 05:58:48.493633] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.918 [2024-06-11 05:58:48.493932] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.918 [2024-06-11 05:58:48.494242] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000058 00:06:17.918 [2024-06-11 05:58:48.494553] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000058 00:06:17.918 [2024-06-11 05:58:48.494879] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=78f43ef77707775d 00:06:17.918 [2024-06-11 05:58:48.495068] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=800b6b673440097b 00:06:17.918 passed 00:06:17.918 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-06-11 05:58:48.495302] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f54c, Actual=fd4c 00:06:17.918 [2024-06-11 05:58:48.495619] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f621, Actual=fe21 00:06:17.918 [2024-06-11 05:58:48.495928] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.918 [2024-06-11 05:58:48.496242] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.918 [2024-06-11 05:58:48.496557] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:06:17.918 [2024-06-11 05:58:48.496881] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:06:17.918 [2024-06-11 05:58:48.497192] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=5c85 00:06:17.918 [2024-06-11 05:58:48.497397] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=9c65 00:06:17.918 [2024-06-11 05:58:48.497568] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=12b753ed, Actual=1ab753ed 00:06:17.918 [2024-06-11 05:58:48.497881] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=30574660, Actual=38574660 00:06:17.918 [2024-06-11 05:58:48.498180] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.918 [2024-06-11 05:58:48.498490] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.918 [2024-06-11 05:58:48.498802] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:06:17.918 [2024-06-11 05:58:48.499117] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:06:17.918 [2024-06-11 05:58:48.499443] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=7ca60735 00:06:17.918 [2024-06-11 05:58:48.499629] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=d3676390 00:06:17.918 [2024-06-11 05:58:48.499833] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a77286cc20d3, Actual=a576a7728ecc20d3 00:06:17.918 [2024-06-11 05:58:48.500130] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4037a266, Actual=88010a2d4837a266 00:06:17.918 [2024-06-11 05:58:48.500440] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: 
*ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.918 [2024-06-11 05:58:48.500740] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.918 [2024-06-11 05:58:48.501064] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000058 00:06:17.918 [2024-06-11 05:58:48.501362] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000058 00:06:17.918 [2024-06-11 05:58:48.501698] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=78f43ef77707775d 00:06:17.918 [2024-06-11 05:58:48.501871] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=800b6b673440097b 00:06:17.918 passed 00:06:17.918 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-06-11 05:58:48.502102] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f54c, Actual=fd4c 00:06:17.918 [2024-06-11 05:58:48.502423] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f621, Actual=fe21 00:06:17.918 [2024-06-11 05:58:48.502740] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.918 [2024-06-11 05:58:48.503050] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.918 [2024-06-11 05:58:48.503408] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:06:17.918 [2024-06-11 05:58:48.503717] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:06:17.918 [2024-06-11 05:58:48.504033] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=5c85 00:06:17.918 [2024-06-11 05:58:48.504216] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=9c65 00:06:17.918 [2024-06-11 05:58:48.504396] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=12b753ed, Actual=1ab753ed 00:06:17.918 [2024-06-11 05:58:48.504703] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=30574660, Actual=38574660 00:06:17.918 [2024-06-11 05:58:48.505063] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.918 [2024-06-11 05:58:48.505378] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.918 [2024-06-11 05:58:48.505690] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:06:17.918 [2024-06-11 05:58:48.506001] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:06:17.918 
[2024-06-11 05:58:48.506312] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=7ca60735 00:06:17.918 [2024-06-11 05:58:48.506500] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=d3676390 00:06:17.918 [2024-06-11 05:58:48.506688] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a77286cc20d3, Actual=a576a7728ecc20d3 00:06:17.918 [2024-06-11 05:58:48.506994] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4037a266, Actual=88010a2d4837a266 00:06:17.918 [2024-06-11 05:58:48.507311] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.918 [2024-06-11 05:58:48.507620] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.918 [2024-06-11 05:58:48.507925] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000058 00:06:17.918 [2024-06-11 05:58:48.508240] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000058 00:06:17.918 [2024-06-11 05:58:48.508571] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=78f43ef77707775d 00:06:17.918 [2024-06-11 05:58:48.508752] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=800b6b673440097b 00:06:17.918 passed 00:06:17.918 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-06-11 05:58:48.508997] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f54c, Actual=fd4c 00:06:17.918 [2024-06-11 05:58:48.509304] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f621, Actual=fe21 00:06:17.918 [2024-06-11 05:58:48.509611] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.918 [2024-06-11 05:58:48.509937] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.918 [2024-06-11 05:58:48.510269] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:06:17.919 [2024-06-11 05:58:48.510576] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:06:17.919 [2024-06-11 05:58:48.510893] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=5c85 00:06:17.919 [2024-06-11 05:58:48.511082] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=9c65 00:06:17.919 passed 00:06:17.919 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-06-11 05:58:48.511327] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare 
Guard: LBA=88, Expected=12b753ed, Actual=1ab753ed 00:06:17.919 [2024-06-11 05:58:48.511648] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=30574660, Actual=38574660 00:06:17.919 [2024-06-11 05:58:48.511984] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.919 [2024-06-11 05:58:48.512288] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.919 [2024-06-11 05:58:48.512596] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:06:17.919 [2024-06-11 05:58:48.512918] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:06:17.919 [2024-06-11 05:58:48.513226] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=7ca60735 00:06:17.919 [2024-06-11 05:58:48.513402] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=d3676390 00:06:17.919 [2024-06-11 05:58:48.513632] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a77286cc20d3, Actual=a576a7728ecc20d3 00:06:17.919 [2024-06-11 05:58:48.513954] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4037a266, Actual=88010a2d4837a266 00:06:17.919 [2024-06-11 05:58:48.514256] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.919 [2024-06-11 05:58:48.514554] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.919 [2024-06-11 05:58:48.514853] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000058 00:06:17.919 [2024-06-11 05:58:48.515179] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000058 00:06:17.919 [2024-06-11 05:58:48.515515] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=78f43ef77707775d 00:06:17.919 [2024-06-11 05:58:48.515704] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=800b6b673440097b 00:06:17.919 passed 00:06:17.919 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-06-11 05:58:48.515937] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f54c, Actual=fd4c 00:06:17.919 [2024-06-11 05:58:48.516258] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f621, Actual=fe21 00:06:17.919 [2024-06-11 05:58:48.516565] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.919 [2024-06-11 05:58:48.516890] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, 
Expected=88, Actual=888 00:06:17.919 [2024-06-11 05:58:48.517220] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:06:17.919 [2024-06-11 05:58:48.517528] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:06:17.919 [2024-06-11 05:58:48.517834] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=5c85 00:06:17.919 [2024-06-11 05:58:48.518015] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=9c65 00:06:17.919 passed 00:06:17.919 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-06-11 05:58:48.518225] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=12b753ed, Actual=1ab753ed 00:06:17.919 [2024-06-11 05:58:48.518524] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=30574660, Actual=38574660 00:06:17.919 [2024-06-11 05:58:48.518861] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.919 [2024-06-11 05:58:48.519168] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.919 [2024-06-11 05:58:48.519499] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:06:17.919 [2024-06-11 05:58:48.519810] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8000058 00:06:17.919 [2024-06-11 05:58:48.520125] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=7ca60735 00:06:17.919 [2024-06-11 05:58:48.520306] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=d3676390 00:06:17.919 [2024-06-11 05:58:48.520559] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a77286cc20d3, Actual=a576a7728ecc20d3 00:06:17.919 [2024-06-11 05:58:48.520886] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4037a266, Actual=88010a2d4837a266 00:06:17.919 [2024-06-11 05:58:48.521193] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.919 [2024-06-11 05:58:48.521492] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=888 00:06:17.919 [2024-06-11 05:58:48.521801] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000058 00:06:17.919 [2024-06-11 05:58:48.522117] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000000058 00:06:17.919 [2024-06-11 05:58:48.522455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=78f43ef77707775d 00:06:17.919 [2024-06-11 
05:58:48.522647] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=800b6b673440097b 00:06:17.919 passed 00:06:17.919 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:06:17.919 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:06:17.919 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:06:17.919 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:18.180 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:06:18.180 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:06:18.180 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:06:18.180 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:06:18.180 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:18.180 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-06-11 05:58:48.568062] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f54c, Actual=fd4c 00:06:18.180 [2024-06-11 05:58:48.569216] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=bad0, Actual=b2d0 00:06:18.180 [2024-06-11 05:58:48.570347] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=888 00:06:18.180 [2024-06-11 05:58:48.571474] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=888 00:06:18.180 [2024-06-11 05:58:48.572606] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=800005d 00:06:18.180 [2024-06-11 05:58:48.573744] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=800005d 00:06:18.180 [2024-06-11 05:58:48.574854] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=5c85 00:06:18.180 [2024-06-11 05:58:48.575983] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f141, Actual=9305 00:06:18.180 [2024-06-11 05:58:48.577119] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=12b753ed, Actual=1ab753ed 00:06:18.180 [2024-06-11 05:58:48.578251] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=119f2e9, Actual=919f2e9 00:06:18.180 [2024-06-11 05:58:48.579407] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=888 00:06:18.180 [2024-06-11 05:58:48.580555] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=888 00:06:18.180 [2024-06-11 05:58:48.581674] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=800005d 00:06:18.180 [2024-06-11 05:58:48.582801] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=800005d 00:06:18.180 [2024-06-11 05:58:48.583924] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to 
compare Guard: LBA=93, Expected=1ab753ed, Actual=7ca60735 00:06:18.180 [2024-06-11 05:58:48.585047] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=5a3d6598, Actual=b10d4068 00:06:18.180 [2024-06-11 05:58:48.586155] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a77286cc20d3, Actual=a576a7728ecc20d3 00:06:18.180 [2024-06-11 05:58:48.587309] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=23430a1ceff1ac9f, Actual=23430a1ce7f1ac9f 00:06:18.180 [2024-06-11 05:58:48.588417] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=888 00:06:18.180 [2024-06-11 05:58:48.589544] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=888 00:06:18.180 [2024-06-11 05:58:48.590647] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=8000000005d 00:06:18.180 [2024-06-11 05:58:48.591772] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=8000000005d 00:06:18.180 [2024-06-11 05:58:48.592870] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=78f43ef77707775d 00:06:18.180 [2024-06-11 05:58:48.594022] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=bdcaeff86fabb130, Actual=b5c08eb213dc1a2d 00:06:18.180 passed 00:06:18.180 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-06-11 05:58:48.594377] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=f54c, Actual=fd4c 00:06:18.180 [2024-06-11 05:58:48.594657] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=d951, Actual=d151 00:06:18.180 [2024-06-11 05:58:48.594936] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:06:18.180 [2024-06-11 05:58:48.595227] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:06:18.180 [2024-06-11 05:58:48.595532] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8000059 00:06:18.180 [2024-06-11 05:58:48.595835] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8000059 00:06:18.180 [2024-06-11 05:58:48.596110] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=5c85 00:06:18.180 [2024-06-11 05:58:48.596387] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=f084 00:06:18.180 [2024-06-11 05:58:48.596662] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=12b753ed, Actual=1ab753ed 00:06:18.180 [2024-06-11 05:58:48.596958] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, 
Expected=c099c71c, Actual=c899c71c 00:06:18.180 [2024-06-11 05:58:48.597263] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:06:18.180 [2024-06-11 05:58:48.597546] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:06:18.180 [2024-06-11 05:58:48.597833] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8000059 00:06:18.180 [2024-06-11 05:58:48.598118] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8000059 00:06:18.180 [2024-06-11 05:58:48.598392] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=7ca60735 00:06:18.180 [2024-06-11 05:58:48.598666] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=708d759d 00:06:18.180 [2024-06-11 05:58:48.598951] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a77286cc20d3, Actual=a576a7728ecc20d3 00:06:18.180 [2024-06-11 05:58:48.599228] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=d6a1058fc91e28c0, Actual=d6a1058fc11e28c0 00:06:18.180 [2024-06-11 05:58:48.599508] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:06:18.180 [2024-06-11 05:58:48.599783] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:06:18.180 [2024-06-11 05:58:48.600066] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80000000059 00:06:18.180 [2024-06-11 05:58:48.600342] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80000000059 00:06:18.180 [2024-06-11 05:58:48.600645] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=78f43ef77707775d 00:06:18.180 [2024-06-11 05:58:48.600936] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=4022812135339e72 00:06:18.180 passed 00:06:18.180 Test: dix_sec_512_md_0_error ...passed 00:06:18.180 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-06-11 05:58:48.601030] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
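The dif_copy_* and dix_* cases running here repeat the same checks for the two metadata layouts: dif_copy keeps the tuple interleaved with the data while copying, and DIX carries the identical 8-byte tuple in a separate metadata buffer. The dix_sec_512_md_0_error message above confirms the same spdk_dif_ctx_init() precondition applies either way; stated as arithmetic (an illustrative helper, not an SPDK function):

#include <stdbool.h>
#include <stdint.h>

#define T10_DIF_SIZE 8u	/* guard(2) + app tag(2) + ref tag(4) */

/* Mirrors "Metadata size is smaller than DIF size": md_size = 0 fails,
 * while the 512+8 and 4096+128 formats used throughout this suite pass. */
static bool
dif_md_size_ok(uint32_t md_size)
{
	return md_size >= T10_DIF_SIZE;
}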
00:06:18.180 passed 00:06:18.180 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:06:18.180 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:06:18.180 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:18.180 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:06:18.180 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:06:18.180 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:06:18.180 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:06:18.180 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:18.180 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-06-11 05:58:48.645023] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f54c, Actual=fd4c 00:06:18.180 [2024-06-11 05:58:48.646155] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=bad0, Actual=b2d0 00:06:18.180 [2024-06-11 05:58:48.647265] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=888 00:06:18.180 [2024-06-11 05:58:48.648371] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=888 00:06:18.181 [2024-06-11 05:58:48.649510] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=800005d 00:06:18.181 [2024-06-11 05:58:48.650625] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=800005d 00:06:18.181 [2024-06-11 05:58:48.651735] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=5c85 00:06:18.181 [2024-06-11 05:58:48.652854] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f141, Actual=9305 00:06:18.181 [2024-06-11 05:58:48.653956] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=12b753ed, Actual=1ab753ed 00:06:18.181 [2024-06-11 05:58:48.655064] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=119f2e9, Actual=919f2e9 00:06:18.181 [2024-06-11 05:58:48.656184] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=888 00:06:18.181 [2024-06-11 05:58:48.657298] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=888 00:06:18.181 [2024-06-11 05:58:48.658417] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=800005d 00:06:18.181 [2024-06-11 05:58:48.659538] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=800005d 00:06:18.181 [2024-06-11 05:58:48.660642] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ed, Actual=7ca60735 00:06:18.181 [2024-06-11 05:58:48.661768] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=5a3d6598, Actual=b10d4068 00:06:18.181 [2024-06-11 05:58:48.662897] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a77286cc20d3, Actual=a576a7728ecc20d3 00:06:18.181 [2024-06-11 05:58:48.664005] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=23430a1ceff1ac9f, Actual=23430a1ce7f1ac9f 00:06:18.181 [2024-06-11 05:58:48.665117] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=888 00:06:18.181 [2024-06-11 05:58:48.666214] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=888 00:06:18.181 [2024-06-11 05:58:48.667328] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=8000000005d 00:06:18.181 [2024-06-11 05:58:48.668423] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=8000000005d 00:06:18.181 [2024-06-11 05:58:48.669544] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=78f43ef77707775d 00:06:18.181 [2024-06-11 05:58:48.670639] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=bdcaeff86fabb130, Actual=b5c08eb213dc1a2d 00:06:18.181 passed 00:06:18.181 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-06-11 05:58:48.671057] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=f54c, Actual=fd4c 00:06:18.181 [2024-06-11 05:58:48.671347] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=d951, Actual=d151 00:06:18.181 [2024-06-11 05:58:48.671636] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:06:18.181 [2024-06-11 05:58:48.671915] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:06:18.181 [2024-06-11 05:58:48.672218] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8000059 00:06:18.181 [2024-06-11 05:58:48.672518] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8000059 00:06:18.181 [2024-06-11 05:58:48.672811] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=5c85 00:06:18.181 [2024-06-11 05:58:48.673091] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=f084 00:06:18.181 [2024-06-11 05:58:48.673372] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=12b753ed, Actual=1ab753ed 00:06:18.181 [2024-06-11 05:58:48.673658] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=c099c71c, Actual=c899c71c 00:06:18.181 [2024-06-11 05:58:48.673963] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:06:18.181 [2024-06-11 05:58:48.674258] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:06:18.181 [2024-06-11 05:58:48.674525] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8000059 00:06:18.181 [2024-06-11 05:58:48.674809] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=8000059 00:06:18.181 [2024-06-11 05:58:48.675083] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=7ca60735 00:06:18.181 [2024-06-11 05:58:48.675370] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=708d759d 00:06:18.181 [2024-06-11 05:58:48.675642] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a77286cc20d3, Actual=a576a7728ecc20d3 00:06:18.181 [2024-06-11 05:58:48.675914] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=d6a1058fc91e28c0, Actual=d6a1058fc11e28c0 00:06:18.181 [2024-06-11 05:58:48.676180] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:06:18.181 [2024-06-11 05:58:48.676445] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=888 00:06:18.181 [2024-06-11 05:58:48.676713] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80000000059 00:06:18.181 [2024-06-11 05:58:48.677000] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=80000000059 00:06:18.181 [2024-06-11 05:58:48.677270] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=78f43ef77707775d 00:06:18.181 [2024-06-11 05:58:48.677531] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=4022812135339e72 00:06:18.181 passed 00:06:18.181 Test: set_md_interleave_iovs_test ...passed 00:06:18.181 Test: set_md_interleave_iovs_split_test ...passed 00:06:18.181 Test: dif_generate_stream_pi_16_test ...passed 00:06:18.181 Test: dif_generate_stream_test ...passed 00:06:18.181 Test: set_md_interleave_iovs_alignment_test ...[2024-06-11 05:58:48.685401] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
00:06:18.181 passed 00:06:18.181 Test: dif_generate_split_test ...passed 00:06:18.181 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:06:18.181 Test: dif_verify_split_test ...passed 00:06:18.181 Test: dif_verify_stream_multi_segments_test ...passed 00:06:18.181 Test: update_crc32c_pi_16_test ...passed 00:06:18.181 Test: update_crc32c_test ...passed 00:06:18.181 Test: dif_update_crc32c_split_test ...passed 00:06:18.181 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:06:18.181 Test: get_range_with_md_test ...passed 00:06:18.181 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:06:18.181 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:06:18.181 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:06:18.181 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:06:18.181 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:06:18.181 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:06:18.181 Test: dif_generate_and_verify_unmap_test ...passed 00:06:18.181 00:06:18.181 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.181 suites 1 1 n/a 0 0 00:06:18.181 tests 79 79 79 0 0 00:06:18.181 asserts 3584 3584 3584 0 n/a 00:06:18.181 00:06:18.181 Elapsed time = 0.356 seconds 00:06:18.181 05:58:48 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:06:18.181 00:06:18.181 00:06:18.181 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.181 http://cunit.sourceforge.net/ 00:06:18.181 00:06:18.181 00:06:18.181 Suite: iov 00:06:18.181 Test: test_single_iov ...passed 00:06:18.181 Test: test_simple_iov ...passed 00:06:18.181 Test: test_complex_iov ...passed 00:06:18.181 Test: test_iovs_to_buf ...passed 00:06:18.181 Test: test_buf_to_iovs ...passed 00:06:18.181 Test: test_memset ...passed 00:06:18.181 Test: test_iov_one ...passed 00:06:18.181 Test: test_iov_xfer ...passed 00:06:18.181 00:06:18.181 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.181 suites 1 1 n/a 0 0 00:06:18.181 tests 8 8 8 0 0 00:06:18.181 asserts 156 156 156 0 n/a 00:06:18.181 00:06:18.181 Elapsed time = 0.000 seconds 00:06:18.181 05:58:48 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:06:18.181 00:06:18.181 00:06:18.181 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.181 http://cunit.sourceforge.net/ 00:06:18.181 00:06:18.181 00:06:18.181 Suite: math 00:06:18.181 Test: test_serial_number_arithmetic ...passed 00:06:18.181 Suite: erase 00:06:18.181 Test: test_memset_s ...passed 00:06:18.181 00:06:18.181 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.181 suites 2 2 n/a 0 0 00:06:18.181 tests 2 2 2 0 0 00:06:18.181 asserts 18 18 18 0 n/a 00:06:18.181 00:06:18.181 Elapsed time = 0.000 seconds 00:06:18.440 05:58:48 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:06:18.440 00:06:18.440 00:06:18.440 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.440 http://cunit.sourceforge.net/ 00:06:18.440 00:06:18.440 00:06:18.440 Suite: pipe 00:06:18.440 Test: test_create_destroy ...passed 00:06:18.440 Test: test_write_get_buffer ...passed 00:06:18.440 Test: test_write_advance ...passed 00:06:18.440 Test: test_read_get_buffer ...passed 00:06:18.440 Test: test_read_advance ...passed 00:06:18.440 Test: test_data ...passed 00:06:18.440 00:06:18.440 Run Summary: Type Total Ran 
Passed Failed Inactive 00:06:18.440 suites 1 1 n/a 0 0 00:06:18.440 tests 6 6 6 0 0 00:06:18.440 asserts 250 250 250 0 n/a 00:06:18.440 00:06:18.440 Elapsed time = 0.000 seconds 00:06:18.440 05:58:48 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:06:18.440 00:06:18.440 00:06:18.440 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.440 http://cunit.sourceforge.net/ 00:06:18.440 00:06:18.440 00:06:18.440 Suite: xor 00:06:18.440 Test: test_xor_gen ...passed 00:06:18.440 00:06:18.440 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.440 suites 1 1 n/a 0 0 00:06:18.440 tests 1 1 1 0 0 00:06:18.440 asserts 17 17 17 0 n/a 00:06:18.440 00:06:18.441 Elapsed time = 0.005 seconds 00:06:18.441 00:06:18.441 real 0m0.862s 00:06:18.441 user 0m0.575s 00:06:18.441 sys 0m0.281s 00:06:18.441 05:58:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.441 05:58:48 -- common/autotest_common.sh@10 -- # set +x 00:06:18.441 ************************************ 00:06:18.441 END TEST unittest_util 00:06:18.441 ************************************ 00:06:18.441 05:58:48 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:18.441 05:58:48 -- unit/unittest.sh@283 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:06:18.441 05:58:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:18.441 05:58:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:18.441 05:58:48 -- common/autotest_common.sh@10 -- # set +x 00:06:18.441 ************************************ 00:06:18.441 START TEST unittest_vhost 00:06:18.441 ************************************ 00:06:18.441 05:58:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:06:18.441 00:06:18.441 00:06:18.441 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.441 http://cunit.sourceforge.net/ 00:06:18.441 00:06:18.441 00:06:18.441 Suite: vhost_suite 00:06:18.441 Test: desc_to_iov_test ...[2024-06-11 05:58:48.995654] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:06:18.441 passed 00:06:18.441 Test: create_controller_test ...[2024-06-11 05:58:49.000739] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:06:18.441 [2024-06-11 05:58:49.000879] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:06:18.441 [2024-06-11 05:58:49.001026] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:06:18.441 [2024-06-11 05:58:49.001140] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:06:18.441 [2024-06-11 05:58:49.001226] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:06:18.441 [2024-06-11 05:58:49.001340] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxpassed 00:06:18.441 Test: session_find_by_vid_test ...[2024-06-11 05:58:49.002468] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:06:18.441 passed 00:06:18.441 Test: remove_controller_test ...[2024-06-11 05:58:49.004677] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:06:18.441 passed 00:06:18.441 Test: vq_avail_ring_get_test ...passed 00:06:18.441 Test: vq_packed_ring_test ...passed 00:06:18.441 Test: vhost_blk_construct_test ...passed 00:06:18.441 00:06:18.441 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.441 suites 1 1 n/a 0 0 00:06:18.441 tests 7 7 7 0 0 00:06:18.441 asserts 145 145 145 0 n/a 00:06:18.441 00:06:18.441 Elapsed time = 0.013 seconds 00:06:18.441 00:06:18.441 real 0m0.064s 00:06:18.441 user 0m0.008s 00:06:18.441 sys 0m0.056s 00:06:18.441 05:58:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.441 05:58:49 -- common/autotest_common.sh@10 -- # set +x 00:06:18.441 ************************************ 00:06:18.441 END TEST unittest_vhost 00:06:18.441 ************************************ 00:06:18.441 05:58:49 -- unit/unittest.sh@285 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:06:18.441 05:58:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:18.441 05:58:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:18.441 05:58:49 -- common/autotest_common.sh@10 -- # set +x 00:06:18.700 ************************************ 00:06:18.700 START TEST unittest_dma 00:06:18.700 ************************************ 00:06:18.700 05:58:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:06:18.700 00:06:18.700 00:06:18.700 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.700 http://cunit.sourceforge.net/ 00:06:18.700 00:06:18.700 00:06:18.700 Suite: dma_suite 00:06:18.700 Test: test_dma ...[2024-06-11 05:58:49.110032] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:06:18.700 passed 00:06:18.700 00:06:18.700 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.700 suites 1 1 n/a 0 0 00:06:18.700 tests 1 1 1 0 0 00:06:18.700 asserts 50 50 50 0 n/a 00:06:18.700 00:06:18.700 Elapsed time = 0.001 seconds 00:06:18.700 00:06:18.700 real 0m0.036s 00:06:18.700 user 0m0.020s 00:06:18.700 sys 0m0.015s 00:06:18.700 05:58:49 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.700 05:58:49 -- common/autotest_common.sh@10 -- # set +x 00:06:18.700 ************************************ 00:06:18.700 END TEST unittest_dma 00:06:18.700 ************************************ 00:06:18.700 05:58:49 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:06:18.700 05:58:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:18.700 05:58:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:18.700 05:58:49 -- common/autotest_common.sh@10 -- # set +x 00:06:18.700 ************************************ 00:06:18.700 START TEST unittest_init 00:06:18.700 ************************************ 00:06:18.700 05:58:49 -- common/autotest_common.sh@1104 -- # unittest_init 00:06:18.700 05:58:49 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:06:18.700 00:06:18.700 00:06:18.700 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.700 http://cunit.sourceforge.net/ 00:06:18.700 00:06:18.700 00:06:18.700 Suite: subsystem_suite 00:06:18.700 Test: subsystem_sort_test_depends_on_single ...passed 00:06:18.700 Test: subsystem_sort_test_depends_on_multiple ...passed 00:06:18.700 Test: subsystem_sort_test_missing_dependency ...[2024-06-11 05:58:49.210584] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:06:18.700 passed 00:06:18.700 00:06:18.700 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.700 suites 1 1 n/a 0 0 00:06:18.700 tests 3 3 3 0 0 00:06:18.700 asserts 20 20 20 0 n/a 00:06:18.700 00:06:18.700 Elapsed time = 0.001 seconds 00:06:18.700 [2024-06-11 05:58:49.210971] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:06:18.700 00:06:18.700 real 0m0.048s 00:06:18.700 user 0m0.036s 00:06:18.700 sys 0m0.012s 00:06:18.700 05:58:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.700 05:58:49 -- common/autotest_common.sh@10 -- # set +x 00:06:18.700 ************************************ 00:06:18.700 END TEST unittest_init 00:06:18.700 ************************************ 00:06:18.700 05:58:49 -- unit/unittest.sh@289 -- # '[' yes = yes ']' 00:06:18.700 05:58:49 -- unit/unittest.sh@289 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:06:18.700 05:58:49 -- unit/unittest.sh@290 -- # hostname 00:06:18.700 05:58:49 -- unit/unittest.sh@290 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:06:18.957 geninfo: WARNING: invalid characters removed from testname! 
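The *ERROR* lines emitted by lib/util/dif.c throughout the unittest_util run above are expected output: the DIF/DIX negative-path tests corrupt protection information on purpose and assert that _dif_verify rejects it. Every "Failed to compare Guard/App Tag/Ref Tag" message reduces to recomputing a CRC guard over a data block and comparing the stored tags against the expected values. The C sketch below is illustrative only, not SPDK's dif.c; it models the classic 8-byte T10 DIF tuple (CRC-16 guard, 16-bit application tag, 32-bit reference tag), while the 16-hex-digit guards in some log lines come from a wider-guard format that this sketch does not cover.

/*
 * Illustrative sketch only, not SPDK's dif.c. It mirrors the shape of
 * the "Failed to compare ..." checks logged above for the classic
 * 8-byte T10 DIF tuple appended to each data block.
 */
#include <inttypes.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct t10_dif {
	uint16_t guard;   /* CRC-16 of the data block */
	uint16_t app_tag; /* application-defined */
	uint32_t ref_tag; /* usually the low 32 bits of the LBA */
};

/* CRC-16/T10-DIF: polynomial 0x8bb7, initial value 0, no reflection. */
static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
{
	uint16_t crc = 0;

	while (len--) {
		crc ^= (uint16_t)(*buf++) << 8;
		for (int i = 0; i < 8; i++)
			crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8bb7)
					     : (uint16_t)(crc << 1);
	}
	return crc;
}

/* Returns 0 when the stored tuple matches what this block should carry. */
static int dif_verify(const uint8_t *block, size_t len,
		      const struct t10_dif *dif,
		      uint64_t lba, uint16_t expected_app_tag)
{
	uint16_t guard = crc16_t10dif(block, len);

	if (dif->guard != guard) {
		fprintf(stderr, "Failed to compare Guard: LBA=%" PRIu64
			", Expected=%" PRIx16 ", Actual=%" PRIx16 "\n",
			lba, guard, dif->guard);
		return -1;
	}
	if (dif->app_tag != expected_app_tag) {
		fprintf(stderr, "Failed to compare App Tag: LBA=%" PRIu64 "\n", lba);
		return -1;
	}
	if (dif->ref_tag != (uint32_t)lba) {
		fprintf(stderr, "Failed to compare Ref Tag: LBA=%" PRIu64 "\n", lba);
		return -1;
	}
	return 0;
}

int main(void)
{
	uint8_t block[512] = {0};
	/* Corrupted guard: dif_verify() prints a line shaped like the log. */
	struct t10_dif dif = {.guard = 0xdead, .app_tag = 0x88, .ref_tag = 93};

	dif_verify(block, sizeof(block), &dif, 93, 0x88);
	return 0;
}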
00:06:45.549 05:59:14 -- unit/unittest.sh@291 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:06:48.078 05:59:18 -- unit/unittest.sh@292 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:50.611 05:59:21 -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:53.146 05:59:23 -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:55.680 05:59:26 -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:58.209 05:59:28 -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:00.820 05:59:31 -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:03.350 05:59:33 -- unit/unittest.sh@298 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:07:03.350 05:59:33 -- unit/unittest.sh@299 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:07:04.318 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:04.318 Found 308 entries. 
00:07:04.318 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:07:04.318 Writing .css and .png files. 00:07:04.318 Generating output. 00:07:04.318 Processing file include/linux/virtio_ring.h 00:07:04.318 Processing file include/spdk/bdev_module.h 00:07:04.318 Processing file include/spdk/nvme.h 00:07:04.318 Processing file include/spdk/nvmf_transport.h 00:07:04.318 Processing file include/spdk/util.h 00:07:04.318 Processing file include/spdk/endian.h 00:07:04.318 Processing file include/spdk/thread.h 00:07:04.318 Processing file include/spdk/trace.h 00:07:04.318 Processing file include/spdk/nvme_spec.h 00:07:04.318 Processing file include/spdk/base64.h 00:07:04.318 Processing file include/spdk/histogram_data.h 00:07:04.318 Processing file include/spdk/mmio.h 00:07:04.575 Processing file include/spdk_internal/virtio.h 00:07:04.575 Processing file include/spdk_internal/sgl.h 00:07:04.575 Processing file include/spdk_internal/rdma.h 00:07:04.575 Processing file include/spdk_internal/utf.h 00:07:04.575 Processing file include/spdk_internal/sock.h 00:07:04.575 Processing file include/spdk_internal/nvme_tcp.h 00:07:04.575 Processing file lib/accel/accel.c 00:07:04.575 Processing file lib/accel/accel_rpc.c 00:07:04.575 Processing file lib/accel/accel_sw.c 00:07:04.833 Processing file lib/bdev/bdev.c 00:07:04.833 Processing file lib/bdev/bdev_rpc.c 00:07:04.833 Processing file lib/bdev/part.c 00:07:04.833 Processing file lib/bdev/bdev_zone.c 00:07:04.833 Processing file lib/bdev/scsi_nvme.c 00:07:05.091 Processing file lib/blob/zeroes.c 00:07:05.091 Processing file lib/blob/blob_bs_dev.c 00:07:05.091 Processing file lib/blob/blobstore.c 00:07:05.091 Processing file lib/blob/blobstore.h 00:07:05.091 Processing file lib/blob/request.c 00:07:05.349 Processing file lib/blobfs/tree.c 00:07:05.349 Processing file lib/blobfs/blobfs.c 00:07:05.349 Processing file lib/conf/conf.c 00:07:05.349 Processing file lib/dma/dma.c 00:07:05.608 Processing file lib/env_dpdk/memory.c 00:07:05.608 Processing file lib/env_dpdk/threads.c 00:07:05.608 Processing file lib/env_dpdk/pci_idxd.c 00:07:05.608 Processing file lib/env_dpdk/pci_dpdk.c 00:07:05.608 Processing file lib/env_dpdk/pci_virtio.c 00:07:05.608 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:07:05.608 Processing file lib/env_dpdk/pci_vmd.c 00:07:05.608 Processing file lib/env_dpdk/pci_ioat.c 00:07:05.608 Processing file lib/env_dpdk/sigbus_handler.c 00:07:05.608 Processing file lib/env_dpdk/init.c 00:07:05.608 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:07:05.608 Processing file lib/env_dpdk/env.c 00:07:05.608 Processing file lib/env_dpdk/pci_event.c 00:07:05.608 Processing file lib/env_dpdk/pci.c 00:07:05.867 Processing file lib/event/reactor.c 00:07:05.867 Processing file lib/event/app.c 00:07:05.867 Processing file lib/event/log_rpc.c 00:07:05.867 Processing file lib/event/scheduler_static.c 00:07:05.867 Processing file lib/event/app_rpc.c 00:07:06.125 Processing file lib/ftl/ftl_init.c 00:07:06.125 Processing file lib/ftl/ftl_io.h 00:07:06.125 Processing file lib/ftl/ftl_debug.c 00:07:06.125 Processing file lib/ftl/ftl_reloc.c 00:07:06.125 Processing file lib/ftl/ftl_band_ops.c 00:07:06.125 Processing file lib/ftl/ftl_sb.c 00:07:06.125 Processing file lib/ftl/ftl_band.h 00:07:06.125 Processing file lib/ftl/ftl_p2l.c 00:07:06.125 Processing file lib/ftl/ftl_l2p_flat.c 00:07:06.125 Processing file lib/ftl/ftl_debug.h 00:07:06.125 Processing file lib/ftl/ftl_l2p_cache.c 00:07:06.125 Processing file lib/ftl/ftl_l2p.c 00:07:06.125 Processing 
file lib/ftl/ftl_layout.c 00:07:06.125 Processing file lib/ftl/ftl_writer.c 00:07:06.125 Processing file lib/ftl/ftl_writer.h 00:07:06.125 Processing file lib/ftl/ftl_nv_cache.c 00:07:06.125 Processing file lib/ftl/ftl_rq.c 00:07:06.125 Processing file lib/ftl/ftl_band.c 00:07:06.125 Processing file lib/ftl/ftl_core.c 00:07:06.125 Processing file lib/ftl/ftl_trace.c 00:07:06.125 Processing file lib/ftl/ftl_nv_cache.h 00:07:06.125 Processing file lib/ftl/ftl_nv_cache_io.h 00:07:06.125 Processing file lib/ftl/ftl_core.h 00:07:06.125 Processing file lib/ftl/ftl_io.c 00:07:06.125 Processing file lib/ftl/base/ftl_base_bdev.c 00:07:06.125 Processing file lib/ftl/base/ftl_base_dev.c 00:07:06.383 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:07:06.383 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:07:06.383 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:07:06.383 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:07:06.383 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:07:06.383 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:07:06.383 Processing file lib/ftl/mngt/ftl_mngt.c 00:07:06.383 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:07:06.383 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:07:06.383 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:07:06.383 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:07:06.383 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:07:06.383 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:07:06.641 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:07:06.641 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:07:06.641 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:07:06.641 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:07:06.641 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:07:06.642 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:07:06.900 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:07:06.900 Processing file lib/ftl/utils/ftl_mempool.c 00:07:06.900 Processing file lib/ftl/utils/ftl_property.c 00:07:06.900 Processing file lib/ftl/utils/ftl_conf.c 00:07:06.900 Processing file lib/ftl/utils/ftl_md.c 00:07:06.900 Processing file lib/ftl/utils/ftl_property.h 00:07:06.900 Processing file lib/ftl/utils/ftl_df.h 00:07:06.900 Processing file lib/ftl/utils/ftl_bitmap.c 00:07:06.900 Processing file lib/ftl/utils/ftl_addr_utils.h 00:07:06.900 Processing file lib/idxd/idxd.c 00:07:06.900 Processing file lib/idxd/idxd_internal.h 00:07:06.900 Processing file lib/idxd/idxd_user.c 00:07:06.900 Processing file lib/init/json_config.c 00:07:06.900 Processing file lib/init/subsystem_rpc.c 00:07:06.900 Processing file lib/init/rpc.c 00:07:06.900 Processing file lib/init/subsystem.c 00:07:07.159 Processing file lib/ioat/ioat_internal.h 00:07:07.159 Processing file lib/ioat/ioat.c 00:07:07.417 Processing file lib/iscsi/portal_grp.c 00:07:07.417 Processing file lib/iscsi/task.h 00:07:07.417 Processing file lib/iscsi/tgt_node.c 00:07:07.417 Processing file lib/iscsi/iscsi_subsystem.c 00:07:07.417 Processing file lib/iscsi/init_grp.c 00:07:07.417 Processing file lib/iscsi/iscsi_rpc.c 00:07:07.417 Processing file lib/iscsi/iscsi.h 00:07:07.418 Processing file lib/iscsi/iscsi.c 00:07:07.418 Processing file lib/iscsi/task.c 00:07:07.418 Processing file lib/iscsi/param.c 00:07:07.418 Processing file lib/iscsi/md5.c 00:07:07.418 Processing file lib/iscsi/conn.c 00:07:07.676 Processing file lib/json/json_parse.c 00:07:07.676 Processing file lib/json/json_write.c 00:07:07.676 Processing file lib/json/json_util.c 00:07:07.676 Processing file 
lib/jsonrpc/jsonrpc_server.c 00:07:07.676 Processing file lib/jsonrpc/jsonrpc_client.c 00:07:07.676 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:07:07.676 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:07:07.676 Processing file lib/log/log.c 00:07:07.676 Processing file lib/log/log_flags.c 00:07:07.676 Processing file lib/log/log_deprecated.c 00:07:07.934 Processing file lib/lvol/lvol.c 00:07:07.934 Processing file lib/nbd/nbd_rpc.c 00:07:07.934 Processing file lib/nbd/nbd.c 00:07:07.934 Processing file lib/notify/notify.c 00:07:07.934 Processing file lib/notify/notify_rpc.c 00:07:08.868 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:07:08.868 Processing file lib/nvme/nvme_fabric.c 00:07:08.868 Processing file lib/nvme/nvme_internal.h 00:07:08.868 Processing file lib/nvme/nvme_opal.c 00:07:08.868 Processing file lib/nvme/nvme_qpair.c 00:07:08.868 Processing file lib/nvme/nvme_ns_cmd.c 00:07:08.868 Processing file lib/nvme/nvme_pcie_common.c 00:07:08.868 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:07:08.868 Processing file lib/nvme/nvme_poll_group.c 00:07:08.868 Processing file lib/nvme/nvme.c 00:07:08.868 Processing file lib/nvme/nvme_tcp.c 00:07:08.868 Processing file lib/nvme/nvme_discovery.c 00:07:08.868 Processing file lib/nvme/nvme_cuse.c 00:07:08.868 Processing file lib/nvme/nvme_vfio_user.c 00:07:08.868 Processing file lib/nvme/nvme_rdma.c 00:07:08.868 Processing file lib/nvme/nvme_io_msg.c 00:07:08.868 Processing file lib/nvme/nvme_quirks.c 00:07:08.868 Processing file lib/nvme/nvme_transport.c 00:07:08.868 Processing file lib/nvme/nvme_ctrlr.c 00:07:08.868 Processing file lib/nvme/nvme_zns.c 00:07:08.868 Processing file lib/nvme/nvme_ns.c 00:07:08.868 Processing file lib/nvme/nvme_pcie_internal.h 00:07:08.868 Processing file lib/nvme/nvme_pcie.c 00:07:08.868 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:07:09.127 Processing file lib/nvmf/transport.c 00:07:09.127 Processing file lib/nvmf/nvmf_internal.h 00:07:09.127 Processing file lib/nvmf/subsystem.c 00:07:09.127 Processing file lib/nvmf/tcp.c 00:07:09.127 Processing file lib/nvmf/ctrlr_discovery.c 00:07:09.127 Processing file lib/nvmf/ctrlr.c 00:07:09.127 Processing file lib/nvmf/ctrlr_bdev.c 00:07:09.127 Processing file lib/nvmf/rdma.c 00:07:09.127 Processing file lib/nvmf/nvmf_rpc.c 00:07:09.127 Processing file lib/nvmf/nvmf.c 00:07:09.127 Processing file lib/rdma/rdma_verbs.c 00:07:09.127 Processing file lib/rdma/common.c 00:07:09.386 Processing file lib/rpc/rpc.c 00:07:09.386 Processing file lib/scsi/port.c 00:07:09.386 Processing file lib/scsi/scsi.c 00:07:09.386 Processing file lib/scsi/scsi_pr.c 00:07:09.386 Processing file lib/scsi/scsi_bdev.c 00:07:09.386 Processing file lib/scsi/lun.c 00:07:09.386 Processing file lib/scsi/dev.c 00:07:09.386 Processing file lib/scsi/task.c 00:07:09.386 Processing file lib/scsi/scsi_rpc.c 00:07:09.644 Processing file lib/sock/sock.c 00:07:09.644 Processing file lib/sock/sock_rpc.c 00:07:09.644 Processing file lib/thread/thread.c 00:07:09.644 Processing file lib/thread/iobuf.c 00:07:09.644 Processing file lib/trace/trace_rpc.c 00:07:09.644 Processing file lib/trace/trace_flags.c 00:07:09.644 Processing file lib/trace/trace.c 00:07:09.903 Processing file lib/trace_parser/trace.cpp 00:07:09.903 Processing file lib/ut/ut.c 00:07:09.903 Processing file lib/ut_mock/mock.c 00:07:10.162 Processing file lib/util/cpuset.c 00:07:10.162 Processing file lib/util/fd.c 00:07:10.162 Processing file lib/util/dif.c 00:07:10.162 Processing file lib/util/math.c 00:07:10.162 Processing 
file lib/util/iov.c 00:07:10.162 Processing file lib/util/zipf.c 00:07:10.162 Processing file lib/util/pipe.c 00:07:10.162 Processing file lib/util/crc64.c 00:07:10.162 Processing file lib/util/file.c 00:07:10.162 Processing file lib/util/hexlify.c 00:07:10.162 Processing file lib/util/crc16.c 00:07:10.162 Processing file lib/util/crc32_ieee.c 00:07:10.162 Processing file lib/util/string.c 00:07:10.162 Processing file lib/util/uuid.c 00:07:10.162 Processing file lib/util/xor.c 00:07:10.162 Processing file lib/util/crc32.c 00:07:10.162 Processing file lib/util/base64.c 00:07:10.162 Processing file lib/util/crc32c.c 00:07:10.162 Processing file lib/util/strerror_tls.c 00:07:10.162 Processing file lib/util/bit_array.c 00:07:10.162 Processing file lib/util/fd_group.c 00:07:10.162 Processing file lib/vfio_user/host/vfio_user.c 00:07:10.162 Processing file lib/vfio_user/host/vfio_user_pci.c 00:07:10.419 Processing file lib/vhost/vhost_rpc.c 00:07:10.420 Processing file lib/vhost/vhost_blk.c 00:07:10.420 Processing file lib/vhost/vhost_internal.h 00:07:10.420 Processing file lib/vhost/vhost.c 00:07:10.420 Processing file lib/vhost/rte_vhost_user.c 00:07:10.420 Processing file lib/vhost/vhost_scsi.c 00:07:10.678 Processing file lib/virtio/virtio_pci.c 00:07:10.678 Processing file lib/virtio/virtio.c 00:07:10.678 Processing file lib/virtio/virtio_vhost_user.c 00:07:10.678 Processing file lib/virtio/virtio_vfio_user.c 00:07:10.678 Processing file lib/vmd/vmd.c 00:07:10.678 Processing file lib/vmd/led.c 00:07:10.678 Processing file module/accel/dsa/accel_dsa_rpc.c 00:07:10.678 Processing file module/accel/dsa/accel_dsa.c 00:07:10.937 Processing file module/accel/error/accel_error.c 00:07:10.937 Processing file module/accel/error/accel_error_rpc.c 00:07:10.937 Processing file module/accel/iaa/accel_iaa.c 00:07:10.937 Processing file module/accel/iaa/accel_iaa_rpc.c 00:07:10.937 Processing file module/accel/ioat/accel_ioat.c 00:07:10.937 Processing file module/accel/ioat/accel_ioat_rpc.c 00:07:10.937 Processing file module/bdev/aio/bdev_aio.c 00:07:10.937 Processing file module/bdev/aio/bdev_aio_rpc.c 00:07:11.195 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:07:11.195 Processing file module/bdev/delay/vbdev_delay.c 00:07:11.195 Processing file module/bdev/error/vbdev_error.c 00:07:11.196 Processing file module/bdev/error/vbdev_error_rpc.c 00:07:11.196 Processing file module/bdev/ftl/bdev_ftl.c 00:07:11.196 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:07:11.454 Processing file module/bdev/gpt/gpt.c 00:07:11.454 Processing file module/bdev/gpt/gpt.h 00:07:11.454 Processing file module/bdev/gpt/vbdev_gpt.c 00:07:11.454 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:07:11.454 Processing file module/bdev/iscsi/bdev_iscsi.c 00:07:11.454 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:07:11.454 Processing file module/bdev/lvol/vbdev_lvol.c 00:07:11.711 Processing file module/bdev/malloc/bdev_malloc.c 00:07:11.712 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:07:11.712 Processing file module/bdev/null/bdev_null.c 00:07:11.712 Processing file module/bdev/null/bdev_null_rpc.c 00:07:11.970 Processing file module/bdev/nvme/vbdev_opal.c 00:07:11.970 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:07:11.970 Processing file module/bdev/nvme/nvme_rpc.c 00:07:11.970 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:07:11.970 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:07:11.970 Processing file module/bdev/nvme/bdev_mdns_client.c 00:07:11.970 Processing file 
module/bdev/nvme/bdev_nvme.c 00:07:12.228 Processing file module/bdev/passthru/vbdev_passthru.c 00:07:12.228 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:07:12.487 Processing file module/bdev/raid/raid1.c 00:07:12.487 Processing file module/bdev/raid/bdev_raid_rpc.c 00:07:12.487 Processing file module/bdev/raid/concat.c 00:07:12.487 Processing file module/bdev/raid/bdev_raid.c 00:07:12.487 Processing file module/bdev/raid/raid0.c 00:07:12.487 Processing file module/bdev/raid/bdev_raid.h 00:07:12.487 Processing file module/bdev/raid/bdev_raid_sb.c 00:07:12.487 Processing file module/bdev/split/vbdev_split.c 00:07:12.487 Processing file module/bdev/split/vbdev_split_rpc.c 00:07:12.487 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:07:12.487 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:07:12.487 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:07:12.746 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:07:12.746 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:07:12.746 Processing file module/blob/bdev/blob_bdev.c 00:07:12.746 Processing file module/blobfs/bdev/blobfs_bdev.c 00:07:12.746 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:07:13.005 Processing file module/env_dpdk/env_dpdk_rpc.c 00:07:13.005 Processing file module/event/subsystems/accel/accel.c 00:07:13.005 Processing file module/event/subsystems/bdev/bdev.c 00:07:13.005 Processing file module/event/subsystems/iobuf/iobuf.c 00:07:13.005 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:07:13.264 Processing file module/event/subsystems/iscsi/iscsi.c 00:07:13.264 Processing file module/event/subsystems/nbd/nbd.c 00:07:13.264 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:07:13.264 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:07:13.264 Processing file module/event/subsystems/scheduler/scheduler.c 00:07:13.576 Processing file module/event/subsystems/scsi/scsi.c 00:07:13.576 Processing file module/event/subsystems/sock/sock.c 00:07:13.576 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:07:13.576 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:07:13.576 Processing file module/event/subsystems/vmd/vmd.c 00:07:13.576 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:07:13.854 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:07:13.854 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:07:13.854 Processing file module/scheduler/gscheduler/gscheduler.c 00:07:13.854 Processing file module/sock/sock_kernel.h 00:07:14.113 Processing file module/sock/posix/posix.c 00:07:14.114 Writing directory view page. 
00:07:14.114 Overall coverage rate:
00:07:14.114 lines......: 38.8% (38774 of 99805 lines)
00:07:14.114 functions..: 42.5% (3546 of 8335 functions)
00:07:14.114
00:07:14.114
00:07:14.114 =====================
00:07:14.114 All unit tests passed
00:07:14.114 =====================
00:07:14.114 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage
00:07:14.114 05:59:44 -- unit/unittest.sh@302 -- # set +x
00:07:14.114
00:07:14.114
00:07:14.114
00:07:14.114 real 2m16.471s
00:07:14.114 user 1m50.868s
00:07:14.114 sys 0m16.863s
00:07:14.114 05:59:44 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:14.114 05:59:44 -- common/autotest_common.sh@10 -- # set +x
00:07:14.114 ************************************
00:07:14.114 END TEST unittest
00:07:14.114 ************************************
00:07:14.114 05:59:44 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']'
00:07:14.114 05:59:44 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]]
00:07:14.114 05:59:44 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]]
00:07:14.114 05:59:44 -- spdk/autotest.sh@173 -- # timing_enter lib
00:07:14.114 05:59:44 -- common/autotest_common.sh@712 -- # xtrace_disable
00:07:14.114 05:59:44 -- common/autotest_common.sh@10 -- # set +x
00:07:14.114 05:59:44 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:07:14.114 05:59:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:07:14.114 05:59:44 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:07:14.114 05:59:44 -- common/autotest_common.sh@10 -- # set +x
00:07:14.114 ************************************
00:07:14.114 START TEST env
00:07:14.114 ************************************
00:07:14.114 05:59:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:07:14.114 * Looking for test storage...
00:07:14.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:14.114 05:59:44 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:14.114 05:59:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:14.114 05:59:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.114 05:59:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.114 ************************************ 00:07:14.114 START TEST env_memory 00:07:14.114 ************************************ 00:07:14.114 05:59:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:14.422 00:07:14.422 00:07:14.422 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.422 http://cunit.sourceforge.net/ 00:07:14.422 00:07:14.422 00:07:14.422 Suite: memory 00:07:14.422 Test: alloc and free memory map ...[2024-06-11 05:59:44.811955] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:14.422 passed 00:07:14.422 Test: mem map translation ...[2024-06-11 05:59:44.866422] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:14.422 [2024-06-11 05:59:44.866569] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:14.422 [2024-06-11 05:59:44.866715] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:14.422 [2024-06-11 05:59:44.866819] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:14.422 passed 00:07:14.422 Test: mem map registration ...[2024-06-11 05:59:44.956122] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:07:14.422 [2024-06-11 05:59:44.956251] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:07:14.422 passed 00:07:14.682 Test: mem map adjacent registrations ...passed 00:07:14.682 00:07:14.682 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.682 suites 1 1 n/a 0 0 00:07:14.682 tests 4 4 4 0 0 00:07:14.682 asserts 152 152 152 0 n/a 00:07:14.682 00:07:14.682 Elapsed time = 0.313 seconds 00:07:14.682 00:07:14.682 real 0m0.356s 00:07:14.682 user 0m0.320s 00:07:14.682 sys 0m0.037s 00:07:14.682 05:59:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.682 05:59:45 -- common/autotest_common.sh@10 -- # set +x 00:07:14.682 ************************************ 00:07:14.682 END TEST env_memory 00:07:14.682 ************************************ 00:07:14.682 05:59:45 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:14.682 05:59:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:14.682 05:59:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.682 05:59:45 -- common/autotest_common.sh@10 -- # set +x 00:07:14.682 ************************************ 00:07:14.682 START TEST env_vtophys 00:07:14.682 ************************************ 00:07:14.682 05:59:45 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:14.682 EAL: lib.eal log level changed from notice to debug 00:07:14.682 EAL: Detected lcore 0 as core 0 on socket 0 00:07:14.682 EAL: Detected lcore 1 as core 0 on socket 0 00:07:14.682 EAL: Detected lcore 2 as core 0 on socket 0 00:07:14.682 EAL: Detected lcore 3 as core 0 on socket 0 00:07:14.682 EAL: Detected lcore 4 as core 0 on socket 0 00:07:14.682 EAL: Detected lcore 5 as core 0 on socket 0 00:07:14.682 EAL: Detected lcore 6 as core 0 on socket 0 00:07:14.682 EAL: Detected lcore 7 as core 0 on socket 0 00:07:14.682 EAL: Detected lcore 8 as core 0 on socket 0 00:07:14.682 EAL: Detected lcore 9 as core 0 on socket 0 00:07:14.682 EAL: Maximum logical cores by configuration: 128 00:07:14.682 EAL: Detected CPU lcores: 10 00:07:14.682 EAL: Detected NUMA nodes: 1 00:07:14.682 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:07:14.682 EAL: Checking presence of .so 'librte_eal.so.24' 00:07:14.682 EAL: Checking presence of .so 'librte_eal.so' 00:07:14.682 EAL: Detected static linkage of DPDK 00:07:14.682 EAL: No shared files mode enabled, IPC will be disabled 00:07:14.682 EAL: Selected IOVA mode 'PA' 00:07:14.682 EAL: Probing VFIO support... 00:07:14.682 EAL: IOMMU type 1 (Type 1) is supported 00:07:14.682 EAL: IOMMU type 7 (sPAPR) is not supported 00:07:14.682 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:07:14.682 EAL: VFIO support initialized 00:07:14.682 EAL: Ask a virtual area of 0x2e000 bytes 00:07:14.682 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:14.682 EAL: Setting up physically contiguous memory... 00:07:14.682 EAL: Setting maximum number of open files to 1048576 00:07:14.682 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:14.682 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:14.682 EAL: Ask a virtual area of 0x61000 bytes 00:07:14.682 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:14.682 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:14.682 EAL: Ask a virtual area of 0x400000000 bytes 00:07:14.682 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:14.682 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:14.682 EAL: Ask a virtual area of 0x61000 bytes 00:07:14.682 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:14.682 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:14.682 EAL: Ask a virtual area of 0x400000000 bytes 00:07:14.682 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:14.682 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:14.682 EAL: Ask a virtual area of 0x61000 bytes 00:07:14.682 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:14.682 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:14.682 EAL: Ask a virtual area of 0x400000000 bytes 00:07:14.682 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:14.682 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:14.682 EAL: Ask a virtual area of 0x61000 bytes 00:07:14.682 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:14.682 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:14.682 EAL: Ask a virtual area of 0x400000000 bytes 00:07:14.682 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:14.682 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:14.682 EAL: Hugepages will be freed exactly as allocated. 
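The vtophys run that follows interleaves "Calling mem event callback 'spdk:(nil)'" with "Heap on socket 0 was expanded/shrunk by N MB". Those lines come from DPDK's dynamic memory subsystem: an allocation that outgrows the current heap maps additional hugepages into the memseg lists reserved above, a free hands them back, and every registered mem event callback observes both transitions. The stand-alone sketch below assumes a stock DPDK install (it is not the SPDK test binary, and the callback name "sketch" is made up) but reproduces the same expand/shrink pattern:

/*
 * Minimal sketch assuming stock DPDK; not the SPDK vtophys test itself.
 * Allocations that outgrow the heap trigger RTE_MEM_EVENT_ALLOC (the
 * "expanded by N MB" lines below); freeing triggers RTE_MEM_EVENT_FREE.
 */
#include <stdio.h>

#include <rte_eal.h>
#include <rte_malloc.h>
#include <rte_memory.h>

static void
mem_event_cb(enum rte_mem_event type, const void *addr, size_t len, void *arg)
{
	(void)arg;
	printf("mem event: %s addr=%p len=%zu\n",
	       type == RTE_MEM_EVENT_ALLOC ? "alloc" : "free", addr, len);
}

int
main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return 1;

	/* SPDK registers its own callback the same way; that is the
	 * "Mem event callback 'spdk:(nil)' registered" line below. */
	rte_mem_event_callback_register("sketch", mem_event_cb, NULL);

	/* 4 MB, 6 MB, 10 MB, 18 MB, 34 MB, 66 MB: the growth pattern the
	 * vtophys_malloc_test log below steps through. */
	for (size_t mb = 4; mb <= 66; mb = (mb - 2) * 2 + 2) {
		void *p = rte_malloc(NULL, mb << 20, 0);

		rte_free(p); /* heap shrinks again by the same amount */
	}

	rte_eal_cleanup();
	return 0;
}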
00:07:14.682 EAL: No shared files mode enabled, IPC is disabled 00:07:14.682 EAL: No shared files mode enabled, IPC is disabled 00:07:14.941 EAL: TSC frequency is ~2100000 KHz 00:07:14.941 EAL: Main lcore 0 is ready (tid=7fc23e640a80;cpuset=[0]) 00:07:14.941 EAL: Trying to obtain current memory policy. 00:07:14.941 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:14.941 EAL: Restoring previous memory policy: 0 00:07:14.941 EAL: request: mp_malloc_sync 00:07:14.941 EAL: No shared files mode enabled, IPC is disabled 00:07:14.941 EAL: Heap on socket 0 was expanded by 2MB 00:07:14.941 EAL: No shared files mode enabled, IPC is disabled 00:07:14.941 EAL: Mem event callback 'spdk:(nil)' registered 00:07:14.941 00:07:14.941 00:07:14.941 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.941 http://cunit.sourceforge.net/ 00:07:14.941 00:07:14.941 00:07:14.941 Suite: components_suite 00:07:15.521 Test: vtophys_malloc_test ...passed 00:07:15.521 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:15.521 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:15.521 EAL: Restoring previous memory policy: 0 00:07:15.521 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.521 EAL: request: mp_malloc_sync 00:07:15.521 EAL: No shared files mode enabled, IPC is disabled 00:07:15.521 EAL: Heap on socket 0 was expanded by 4MB 00:07:15.521 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.521 EAL: request: mp_malloc_sync 00:07:15.521 EAL: No shared files mode enabled, IPC is disabled 00:07:15.521 EAL: Heap on socket 0 was shrunk by 4MB 00:07:15.521 EAL: Trying to obtain current memory policy. 00:07:15.521 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:15.521 EAL: Restoring previous memory policy: 0 00:07:15.521 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.521 EAL: request: mp_malloc_sync 00:07:15.521 EAL: No shared files mode enabled, IPC is disabled 00:07:15.521 EAL: Heap on socket 0 was expanded by 6MB 00:07:15.521 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.521 EAL: request: mp_malloc_sync 00:07:15.521 EAL: No shared files mode enabled, IPC is disabled 00:07:15.521 EAL: Heap on socket 0 was shrunk by 6MB 00:07:15.521 EAL: Trying to obtain current memory policy. 00:07:15.521 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:15.521 EAL: Restoring previous memory policy: 0 00:07:15.521 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.521 EAL: request: mp_malloc_sync 00:07:15.521 EAL: No shared files mode enabled, IPC is disabled 00:07:15.521 EAL: Heap on socket 0 was expanded by 10MB 00:07:15.521 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.521 EAL: request: mp_malloc_sync 00:07:15.521 EAL: No shared files mode enabled, IPC is disabled 00:07:15.521 EAL: Heap on socket 0 was shrunk by 10MB 00:07:15.521 EAL: Trying to obtain current memory policy. 00:07:15.521 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:15.521 EAL: Restoring previous memory policy: 0 00:07:15.521 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.521 EAL: request: mp_malloc_sync 00:07:15.521 EAL: No shared files mode enabled, IPC is disabled 00:07:15.521 EAL: Heap on socket 0 was expanded by 18MB 00:07:15.521 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.521 EAL: request: mp_malloc_sync 00:07:15.521 EAL: No shared files mode enabled, IPC is disabled 00:07:15.521 EAL: Heap on socket 0 was shrunk by 18MB 00:07:15.521 EAL: Trying to obtain current memory policy. 
00:07:15.521 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:15.779 EAL: Restoring previous memory policy: 0 00:07:15.779 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.779 EAL: request: mp_malloc_sync 00:07:15.779 EAL: No shared files mode enabled, IPC is disabled 00:07:15.779 EAL: Heap on socket 0 was expanded by 34MB 00:07:15.779 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.779 EAL: request: mp_malloc_sync 00:07:15.779 EAL: No shared files mode enabled, IPC is disabled 00:07:15.779 EAL: Heap on socket 0 was shrunk by 34MB 00:07:15.779 EAL: Trying to obtain current memory policy. 00:07:15.779 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:15.779 EAL: Restoring previous memory policy: 0 00:07:15.779 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.779 EAL: request: mp_malloc_sync 00:07:15.779 EAL: No shared files mode enabled, IPC is disabled 00:07:15.779 EAL: Heap on socket 0 was expanded by 66MB 00:07:16.038 EAL: Calling mem event callback 'spdk:(nil)' 00:07:16.038 EAL: request: mp_malloc_sync 00:07:16.038 EAL: No shared files mode enabled, IPC is disabled 00:07:16.038 EAL: Heap on socket 0 was shrunk by 66MB 00:07:16.038 EAL: Trying to obtain current memory policy. 00:07:16.038 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:16.038 EAL: Restoring previous memory policy: 0 00:07:16.038 EAL: Calling mem event callback 'spdk:(nil)' 00:07:16.038 EAL: request: mp_malloc_sync 00:07:16.038 EAL: No shared files mode enabled, IPC is disabled 00:07:16.038 EAL: Heap on socket 0 was expanded by 130MB 00:07:16.296 EAL: Calling mem event callback 'spdk:(nil)' 00:07:16.296 EAL: request: mp_malloc_sync 00:07:16.296 EAL: No shared files mode enabled, IPC is disabled 00:07:16.296 EAL: Heap on socket 0 was shrunk by 130MB 00:07:16.555 EAL: Trying to obtain current memory policy. 00:07:16.555 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:16.814 EAL: Restoring previous memory policy: 0 00:07:16.814 EAL: Calling mem event callback 'spdk:(nil)' 00:07:16.814 EAL: request: mp_malloc_sync 00:07:16.814 EAL: No shared files mode enabled, IPC is disabled 00:07:16.814 EAL: Heap on socket 0 was expanded by 258MB 00:07:17.380 EAL: Calling mem event callback 'spdk:(nil)' 00:07:17.380 EAL: request: mp_malloc_sync 00:07:17.380 EAL: No shared files mode enabled, IPC is disabled 00:07:17.380 EAL: Heap on socket 0 was shrunk by 258MB 00:07:17.639 EAL: Trying to obtain current memory policy. 00:07:17.639 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:17.898 EAL: Restoring previous memory policy: 0 00:07:17.898 EAL: Calling mem event callback 'spdk:(nil)' 00:07:17.898 EAL: request: mp_malloc_sync 00:07:17.898 EAL: No shared files mode enabled, IPC is disabled 00:07:17.898 EAL: Heap on socket 0 was expanded by 514MB 00:07:18.845 EAL: Calling mem event callback 'spdk:(nil)' 00:07:19.103 EAL: request: mp_malloc_sync 00:07:19.103 EAL: No shared files mode enabled, IPC is disabled 00:07:19.103 EAL: Heap on socket 0 was shrunk by 514MB 00:07:20.041 EAL: Trying to obtain current memory policy. 
00:07:20.041 EAL: Setting policy MPOL_PREFERRED for socket 0
00:07:20.608 EAL: Restoring previous memory policy: 0
00:07:20.608 EAL: Calling mem event callback 'spdk:(nil)'
00:07:20.608 EAL: request: mp_malloc_sync
00:07:20.609 EAL: No shared files mode enabled, IPC is disabled
00:07:20.609 EAL: Heap on socket 0 was expanded by 1026MB
00:07:22.512 EAL: Calling mem event callback 'spdk:(nil)'
00:07:23.078 EAL: request: mp_malloc_sync
00:07:23.078 EAL: No shared files mode enabled, IPC is disabled
00:07:23.078 EAL: Heap on socket 0 was shrunk by 1026MB
00:07:24.978 passed
00:07:24.978
00:07:24.978 Run Summary: Type Total Ran Passed Failed Inactive
00:07:24.978 suites 1 1 n/a 0 0
00:07:24.978 tests 2 2 2 0 0
00:07:24.978 asserts 6391 6391 6391 0 n/a
00:07:24.978
00:07:24.978 Elapsed time = 9.991 seconds
00:07:24.978 EAL: Calling mem event callback 'spdk:(nil)'
00:07:24.978 EAL: request: mp_malloc_sync
00:07:24.978 EAL: No shared files mode enabled, IPC is disabled
00:07:24.978 EAL: Heap on socket 0 was shrunk by 2MB
00:07:24.978 EAL: No shared files mode enabled, IPC is disabled
00:07:24.978 EAL: No shared files mode enabled, IPC is disabled
00:07:24.978 EAL: No shared files mode enabled, IPC is disabled
00:07:24.978
00:07:24.978 real 0m10.302s
00:07:24.978 user 0m8.780s
00:07:24.978 sys 0m1.393s
00:07:24.978 05:59:55 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:24.978 05:59:55 -- common/autotest_common.sh@10 -- # set +x
00:07:24.978 ************************************
00:07:24.978 END TEST env_vtophys
00:07:24.978 ************************************
00:07:24.978 05:59:55 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:07:24.978 05:59:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:07:24.978 05:59:55 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:07:24.978 05:59:55 -- common/autotest_common.sh@10 -- # set +x
00:07:24.978 ************************************
00:07:24.978 START TEST env_pci
00:07:24.978 ************************************
00:07:24.978 05:59:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:07:24.978
00:07:24.978
00:07:24.978 CUnit - A unit testing framework for C - Version 2.1-3
00:07:24.978 http://cunit.sourceforge.net/
00:07:24.978
00:07:24.978
00:07:24.978 Suite: pci
00:07:24.978 Test: pci_hook ...[2024-06-11 05:59:55.581158] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 102803 has claimed it
00:07:24.978 passed
00:07:24.978
00:07:24.978 EAL: Cannot find device (10000:00:01.0)
00:07:24.978 EAL: Failed to attach device on primary process
00:07:24.978 Run Summary: Type Total Ran Passed Failed Inactive
00:07:24.978 suites 1 1 n/a 0 0
00:07:24.978 tests 1 1 1 0 0
00:07:24.978 asserts 25 25 25 0 n/a
00:07:24.978
00:07:24.978 Elapsed time = 0.007 seconds
00:07:25.236
00:07:25.236 real 0m0.116s
00:07:25.236 user 0m0.062s
00:07:25.236 sys 0m0.055s
00:07:25.236 05:59:55 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:25.236 05:59:55 -- common/autotest_common.sh@10 -- # set +x
00:07:25.236 ************************************
00:07:25.236 END TEST env_pci
00:07:25.236 ************************************
00:07:25.236 05:59:55 -- env/env.sh@14 -- # argv='-c 0x1 '
00:07:25.236 05:59:55 -- env/env.sh@15 -- # uname
00:07:25.236 05:59:55 -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:07:25.236 05:59:55 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:07:25.236 05:59:55 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:07:25.236 05:59:55 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']'
00:07:25.236 05:59:55 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:07:25.236 05:59:55 -- common/autotest_common.sh@10 -- # set +x
00:07:25.236 ************************************
00:07:25.236 START TEST env_dpdk_post_init
00:07:25.236 ************************************
00:07:25.236 05:59:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:07:25.236 EAL: Detected CPU lcores: 10
00:07:25.236 EAL: Detected NUMA nodes: 1
00:07:25.236 EAL: Detected static linkage of DPDK
00:07:25.236 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:07:25.236 EAL: Selected IOVA mode 'PA'
00:07:25.236 EAL: VFIO support initialized
00:07:25.495 TELEMETRY: No legacy callbacks, legacy socket not created
00:07:25.495 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1)
00:07:25.495 Starting DPDK initialization...
00:07:25.495 Starting SPDK post initialization...
00:07:25.495 SPDK NVMe probe
00:07:25.495 Attaching to 0000:00:06.0
00:07:25.495 Attached to 0000:00:06.0
00:07:25.495 Cleaning up...
00:07:25.495
00:07:25.495 real 0m0.303s
00:07:25.495 user 0m0.083s
00:07:25.495 sys 0m0.122s
00:07:25.495 05:59:56 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:25.495 ************************************
00:07:25.495 END TEST env_dpdk_post_init
00:07:25.495 ************************************
00:07:25.495 05:59:56 -- common/autotest_common.sh@10 -- # set +x
00:07:25.495 05:59:56 -- env/env.sh@26 -- # uname
00:07:25.495 05:59:56 -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:07:25.495 05:59:56 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:07:25.495 05:59:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:07:25.495 05:59:56 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:07:25.495 05:59:56 -- common/autotest_common.sh@10 -- # set +x
00:07:25.495 ************************************
00:07:25.495 START TEST env_mem_callbacks
00:07:25.495 ************************************
00:07:25.495 05:59:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:07:25.495 EAL: Detected CPU lcores: 10
00:07:25.495 EAL: Detected NUMA nodes: 1
00:07:25.495 EAL: Detected static linkage of DPDK
00:07:25.753 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:07:25.753 EAL: Selected IOVA mode 'PA'
00:07:25.753 EAL: VFIO support initialized
00:07:25.753 TELEMETRY: No legacy callbacks, legacy socket not created
00:07:25.753
00:07:25.753
00:07:25.753 CUnit - A unit testing framework for C - Version 2.1-3
00:07:25.753 http://cunit.sourceforge.net/
00:07:25.753
00:07:25.753
00:07:25.753 Suite: memory
00:07:25.753 Test: test ...
00:07:25.753 register 0x200000200000 2097152
00:07:25.753 malloc 3145728
00:07:25.753 register 0x200000400000 4194304
00:07:25.753 buf 0x2000004fffc0 len 3145728 PASSED
00:07:25.753 malloc 64
00:07:25.753 buf 0x2000004ffec0 len 64 PASSED
00:07:25.753 malloc 4194304
00:07:25.753 register 0x200000800000 6291456
00:07:25.753 buf 0x2000009fffc0 len 4194304 PASSED
00:07:25.753 free 0x2000004fffc0 3145728
00:07:25.753 free 0x2000004ffec0 64
00:07:25.753 unregister 0x200000400000 4194304 PASSED
00:07:25.753 free 0x2000009fffc0 4194304
00:07:25.753 unregister 0x200000800000 6291456 PASSED
00:07:25.753 malloc 8388608
00:07:25.753 register 0x200000400000 10485760
00:07:25.753 buf 0x2000005fffc0 len 8388608 PASSED
00:07:25.753 free 0x2000005fffc0 8388608
00:07:25.753 unregister 0x200000400000 10485760 PASSED
00:07:25.753 passed
00:07:25.753
00:07:25.753 Run Summary: Type Total Ran Passed Failed Inactive
00:07:25.753 suites 1 1 n/a 0 0
00:07:25.753 tests 1 1 1 0 0
00:07:25.753 asserts 15 15 15 0 n/a
00:07:25.753
00:07:25.753 Elapsed time = 0.109 seconds
00:07:26.012
00:07:26.012 real 0m0.358s
00:07:26.012 user 0m0.165s
00:07:26.012 sys 0m0.093s
00:07:26.012 05:59:56 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:26.012 ************************************
00:07:26.012 END TEST env_mem_callbacks
00:07:26.012 ************************************
00:07:26.012 05:59:56 -- common/autotest_common.sh@10 -- # set +x
00:07:26.012
00:07:26.012 real 0m11.851s
00:07:26.012 user 0m9.609s
00:07:26.012 sys 0m1.933s
00:07:26.012 05:59:56 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:26.012 05:59:56 -- common/autotest_common.sh@10 -- # set +x
00:07:26.012 ************************************
00:07:26.012 END TEST env
00:07:26.012 ************************************
00:07:26.012 05:59:56 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:07:26.012 05:59:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:07:26.012 05:59:56 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:07:26.012 05:59:56 -- common/autotest_common.sh@10 -- # set +x
00:07:26.012 ************************************
00:07:26.012 START TEST rpc
00:07:26.012 ************************************
00:07:26.012 05:59:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:07:26.012 * Looking for test storage...
00:07:26.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:07:26.270 05:59:56 -- rpc/rpc.sh@65 -- # spdk_pid=102933
00:07:26.270 05:59:56 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:07:26.270 05:59:56 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:07:26.270 05:59:56 -- rpc/rpc.sh@67 -- # waitforlisten 102933
00:07:26.270 05:59:56 -- common/autotest_common.sh@819 -- # '[' -z 102933 ']'
00:07:26.270 05:59:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:26.270 05:59:56 -- common/autotest_common.sh@824 -- # local max_retries=100
00:07:26.270 05:59:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:26.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
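The rpc suite starting here drives a live spdk_tgt through its JSON-RPC UNIX socket; rpc_cmd in the harness is a thin wrapper around rpc.py. A minimal sketch of the same flow outside the harness, using the binaries and sizes this log shows (the waitforlisten retry loop is reduced to a single probe, and rpc_get_methods as the liveness check is an assumption):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &   # -e bdev enables the bdev tracepoint group
    spdk_pid=$!
    # rpc.py talks to /var/tmp/spdk.sock by default; a cheap call confirms the target is listening
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods > /dev/null
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 8 512   # 8 MB bdev, 512-byte blocks -> "Malloc0"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs            # JSON array like the one captured below
    kill $spdk_pid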
00:07:26.270 05:59:56 -- common/autotest_common.sh@828 -- # xtrace_disable
00:07:26.270 05:59:56 -- common/autotest_common.sh@10 -- # set +x
00:07:26.270 [2024-06-11 05:59:56.786304] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:07:26.270 [2024-06-11 05:59:56.786539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102933 ]
00:07:26.529 [2024-06-11 05:59:56.968252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:26.788 [2024-06-11 05:59:57.243368] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:26.788 [2024-06-11 05:59:57.243594] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:07:26.788 [2024-06-11 05:59:57.243629] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 102933' to capture a snapshot of events at runtime.
00:07:26.788 [2024-06-11 05:59:57.243667] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid102933 for offline analysis/debug.
00:07:26.788 [2024-06-11 05:59:57.243762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:28.214 05:59:58 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:07:28.214 05:59:58 -- common/autotest_common.sh@852 -- # return 0
00:07:28.214 05:59:58 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:07:28.214 05:59:58 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:07:28.214 05:59:58 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:07:28.214 05:59:58 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:07:28.214 05:59:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:07:28.214 05:59:58 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:07:28.214 05:59:58 -- common/autotest_common.sh@10 -- # set +x
00:07:28.214 ************************************
00:07:28.214 START TEST rpc_integrity
00:07:28.214 ************************************
00:07:28.214 05:59:58 -- common/autotest_common.sh@1104 -- # rpc_integrity
00:07:28.214 05:59:58 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:07:28.214 05:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:28.214 05:59:58 -- common/autotest_common.sh@10 -- # set +x
00:07:28.214 05:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:28.214 05:59:58 -- rpc/rpc.sh@12 -- # bdevs='[]'
00:07:28.214 05:59:58 -- rpc/rpc.sh@13 -- # jq length
00:07:28.214 05:59:58 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:07:28.214 05:59:58 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:07:28.214 05:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:28.214 05:59:58 -- common/autotest_common.sh@10 -- # set +x
00:07:28.214 05:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:28.214 05:59:58 -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:07:28.214 05:59:58 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:07:28.214 05:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:28.214 05:59:58 -- common/autotest_common.sh@10 -- # set +x
00:07:28.214 05:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:28.214 05:59:58 -- rpc/rpc.sh@16 -- # bdevs='[
00:07:28.214 {
00:07:28.214 "name": "Malloc0",
00:07:28.214 "aliases": [
00:07:28.214 "bcfed256-336a-4997-ac8e-5b7ac708dedd"
00:07:28.214 ],
00:07:28.214 "product_name": "Malloc disk",
00:07:28.214 "block_size": 512,
00:07:28.214 "num_blocks": 16384,
00:07:28.214 "uuid": "bcfed256-336a-4997-ac8e-5b7ac708dedd",
00:07:28.214 "assigned_rate_limits": {
00:07:28.214 "rw_ios_per_sec": 0,
00:07:28.214 "rw_mbytes_per_sec": 0,
00:07:28.214 "r_mbytes_per_sec": 0,
00:07:28.214 "w_mbytes_per_sec": 0
00:07:28.214 },
00:07:28.214 "claimed": false,
00:07:28.214 "zoned": false,
00:07:28.214 "supported_io_types": {
00:07:28.214 "read": true,
00:07:28.214 "write": true,
00:07:28.214 "unmap": true,
00:07:28.214 "write_zeroes": true,
00:07:28.214 "flush": true,
00:07:28.214 "reset": true,
00:07:28.214 "compare": false,
00:07:28.214 "compare_and_write": false,
00:07:28.214 "abort": true,
00:07:28.214 "nvme_admin": false,
00:07:28.214 "nvme_io": false
00:07:28.214 },
00:07:28.214 "memory_domains": [
00:07:28.214 {
00:07:28.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:28.214 "dma_device_type": 2
00:07:28.214 }
00:07:28.214 ],
00:07:28.214 "driver_specific": {}
00:07:28.214 }
00:07:28.214 ]'
00:07:28.214 05:59:58 -- rpc/rpc.sh@17 -- # jq length
00:07:28.214 05:59:58 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:07:28.214 05:59:58 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:07:28.214 05:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:28.214 05:59:58 -- common/autotest_common.sh@10 -- # set +x
00:07:28.214 [2024-06-11 05:59:58.567414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:07:28.214 [2024-06-11 05:59:58.567505] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:28.214 [2024-06-11 05:59:58.567547] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80
00:07:28.214 [2024-06-11 05:59:58.567570] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:28.214 [2024-06-11 05:59:58.570223] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:28.214 [2024-06-11 05:59:58.570305] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:07:28.214 Passthru0
00:07:28.214 05:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:28.214 05:59:58 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:07:28.214 05:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:28.214 05:59:58 -- common/autotest_common.sh@10 -- # set +x
00:07:28.214 05:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:28.214 05:59:58 -- rpc/rpc.sh@20 -- # bdevs='[
00:07:28.214 {
00:07:28.214 "name": "Malloc0",
00:07:28.214 "aliases": [
00:07:28.214 "bcfed256-336a-4997-ac8e-5b7ac708dedd"
00:07:28.214 ],
00:07:28.214 "product_name": "Malloc disk",
00:07:28.214 "block_size": 512,
00:07:28.214 "num_blocks": 16384,
00:07:28.214 "uuid": "bcfed256-336a-4997-ac8e-5b7ac708dedd",
00:07:28.214 "assigned_rate_limits": {
00:07:28.214 "rw_ios_per_sec": 0,
00:07:28.214 "rw_mbytes_per_sec": 0,
00:07:28.214 "r_mbytes_per_sec": 0,
00:07:28.214 "w_mbytes_per_sec": 0
00:07:28.214 },
00:07:28.214 "claimed": true,
00:07:28.214 "claim_type": "exclusive_write",
00:07:28.214 "zoned": false,
00:07:28.214 "supported_io_types": {
00:07:28.214 "read": true,
00:07:28.214 "write": true, 00:07:28.214 "unmap": true, 00:07:28.214 "write_zeroes": true, 00:07:28.214 "flush": true, 00:07:28.214 "reset": true, 00:07:28.214 "compare": false, 00:07:28.214 "compare_and_write": false, 00:07:28.214 "abort": true, 00:07:28.214 "nvme_admin": false, 00:07:28.214 "nvme_io": false 00:07:28.214 }, 00:07:28.214 "memory_domains": [ 00:07:28.214 { 00:07:28.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.214 "dma_device_type": 2 00:07:28.214 } 00:07:28.214 ], 00:07:28.214 "driver_specific": {} 00:07:28.214 }, 00:07:28.214 { 00:07:28.214 "name": "Passthru0", 00:07:28.214 "aliases": [ 00:07:28.214 "89cf0e4a-996e-5453-9650-c74e0dc05fcb" 00:07:28.214 ], 00:07:28.214 "product_name": "passthru", 00:07:28.214 "block_size": 512, 00:07:28.214 "num_blocks": 16384, 00:07:28.214 "uuid": "89cf0e4a-996e-5453-9650-c74e0dc05fcb", 00:07:28.214 "assigned_rate_limits": { 00:07:28.214 "rw_ios_per_sec": 0, 00:07:28.214 "rw_mbytes_per_sec": 0, 00:07:28.214 "r_mbytes_per_sec": 0, 00:07:28.214 "w_mbytes_per_sec": 0 00:07:28.214 }, 00:07:28.214 "claimed": false, 00:07:28.214 "zoned": false, 00:07:28.214 "supported_io_types": { 00:07:28.214 "read": true, 00:07:28.214 "write": true, 00:07:28.214 "unmap": true, 00:07:28.214 "write_zeroes": true, 00:07:28.214 "flush": true, 00:07:28.214 "reset": true, 00:07:28.214 "compare": false, 00:07:28.214 "compare_and_write": false, 00:07:28.214 "abort": true, 00:07:28.214 "nvme_admin": false, 00:07:28.214 "nvme_io": false 00:07:28.214 }, 00:07:28.214 "memory_domains": [ 00:07:28.214 { 00:07:28.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.215 "dma_device_type": 2 00:07:28.215 } 00:07:28.215 ], 00:07:28.215 "driver_specific": { 00:07:28.215 "passthru": { 00:07:28.215 "name": "Passthru0", 00:07:28.215 "base_bdev_name": "Malloc0" 00:07:28.215 } 00:07:28.215 } 00:07:28.215 } 00:07:28.215 ]' 00:07:28.215 05:59:58 -- rpc/rpc.sh@21 -- # jq length 00:07:28.215 05:59:58 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:28.215 05:59:58 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:28.215 05:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:28.215 05:59:58 -- common/autotest_common.sh@10 -- # set +x 00:07:28.215 05:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:28.215 05:59:58 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:28.215 05:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:28.215 05:59:58 -- common/autotest_common.sh@10 -- # set +x 00:07:28.215 05:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:28.215 05:59:58 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:28.215 05:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:28.215 05:59:58 -- common/autotest_common.sh@10 -- # set +x 00:07:28.215 05:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:28.215 05:59:58 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:28.215 05:59:58 -- rpc/rpc.sh@26 -- # jq length 00:07:28.215 05:59:58 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:28.215 00:07:28.215 real 0m0.296s 00:07:28.215 user 0m0.165s 00:07:28.215 sys 0m0.022s 00:07:28.215 05:59:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.215 05:59:58 -- common/autotest_common.sh@10 -- # set +x 00:07:28.215 ************************************ 00:07:28.215 END TEST rpc_integrity 00:07:28.215 ************************************ 00:07:28.215 05:59:58 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:28.215 05:59:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 
00:07:28.215 05:59:58 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:07:28.215 05:59:58 -- common/autotest_common.sh@10 -- # set +x
00:07:28.215 ************************************
00:07:28.215 START TEST rpc_plugins
00:07:28.215 ************************************
00:07:28.215 05:59:58 -- common/autotest_common.sh@1104 -- # rpc_plugins
00:07:28.215 05:59:58 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:07:28.215 05:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:28.215 05:59:58 -- common/autotest_common.sh@10 -- # set +x
00:07:28.215 05:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:28.215 05:59:58 -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:07:28.215 05:59:58 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:07:28.215 05:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:28.215 05:59:58 -- common/autotest_common.sh@10 -- # set +x
00:07:28.215 05:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:28.215 05:59:58 -- rpc/rpc.sh@31 -- # bdevs='[
00:07:28.215 {
00:07:28.215 "name": "Malloc1",
00:07:28.215 "aliases": [
00:07:28.215 "89fc5fb6-e416-4be2-ad09-fcb4c1b7d3db"
00:07:28.215 ],
00:07:28.215 "product_name": "Malloc disk",
00:07:28.215 "block_size": 4096,
00:07:28.215 "num_blocks": 256,
00:07:28.215 "uuid": "89fc5fb6-e416-4be2-ad09-fcb4c1b7d3db",
00:07:28.215 "assigned_rate_limits": {
00:07:28.215 "rw_ios_per_sec": 0,
00:07:28.215 "rw_mbytes_per_sec": 0,
00:07:28.215 "r_mbytes_per_sec": 0,
00:07:28.215 "w_mbytes_per_sec": 0
00:07:28.215 },
00:07:28.215 "claimed": false,
00:07:28.215 "zoned": false,
00:07:28.215 "supported_io_types": {
00:07:28.215 "read": true,
00:07:28.215 "write": true,
00:07:28.215 "unmap": true,
00:07:28.215 "write_zeroes": true,
00:07:28.215 "flush": true,
00:07:28.215 "reset": true,
00:07:28.215 "compare": false,
00:07:28.215 "compare_and_write": false,
00:07:28.215 "abort": true,
00:07:28.215 "nvme_admin": false,
00:07:28.215 "nvme_io": false
00:07:28.215 },
00:07:28.215 "memory_domains": [
00:07:28.215 {
00:07:28.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:28.215 "dma_device_type": 2
00:07:28.215 }
00:07:28.215 ],
00:07:28.215 "driver_specific": {}
00:07:28.215 }
00:07:28.215 ]'
00:07:28.215 05:59:58 -- rpc/rpc.sh@32 -- # jq length
00:07:28.474 05:59:58 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:07:28.474 05:59:58 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:07:28.474 05:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:28.474 05:59:58 -- common/autotest_common.sh@10 -- # set +x
00:07:28.474 05:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:28.474 05:59:58 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:07:28.474 05:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:28.474 05:59:58 -- common/autotest_common.sh@10 -- # set +x
00:07:28.474 05:59:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:28.474 05:59:58 -- rpc/rpc.sh@35 -- # bdevs='[]'
00:07:28.474 05:59:58 -- rpc/rpc.sh@36 -- # jq length
00:07:28.474 05:59:58 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:07:28.474
00:07:28.474 real 0m0.141s
00:07:28.474 user 0m0.080s
00:07:28.474 sys 0m0.018s
00:07:28.474 05:59:58 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:28.474 05:59:58 -- common/autotest_common.sh@10 -- # set +x
00:07:28.474 ************************************
00:07:28.474 END TEST rpc_plugins
00:07:28.474 ************************************
00:07:28.474 05:59:58 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:07:28.474 05:59:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:07:28.474 05:59:58 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:07:28.474 05:59:58 -- common/autotest_common.sh@10 -- # set +x
00:07:28.474 ************************************
00:07:28.474 START TEST rpc_trace_cmd_test
00:07:28.474 ************************************
00:07:28.474 05:59:58 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test
00:07:28.474 05:59:58 -- rpc/rpc.sh@40 -- # local info
00:07:28.474 05:59:58 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:07:28.474 05:59:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:28.474 05:59:58 -- common/autotest_common.sh@10 -- # set +x
00:07:28.474 05:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:28.474 05:59:59 -- rpc/rpc.sh@42 -- # info='{
00:07:28.474 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid102933",
00:07:28.474 "tpoint_group_mask": "0x8",
00:07:28.474 "iscsi_conn": {
00:07:28.474 "mask": "0x2",
00:07:28.474 "tpoint_mask": "0x0"
00:07:28.474 },
00:07:28.474 "scsi": {
00:07:28.474 "mask": "0x4",
00:07:28.474 "tpoint_mask": "0x0"
00:07:28.474 },
00:07:28.474 "bdev": {
00:07:28.474 "mask": "0x8",
00:07:28.474 "tpoint_mask": "0xffffffffffffffff"
00:07:28.474 },
00:07:28.474 "nvmf_rdma": {
00:07:28.474 "mask": "0x10",
00:07:28.474 "tpoint_mask": "0x0"
00:07:28.474 },
00:07:28.474 "nvmf_tcp": {
00:07:28.474 "mask": "0x20",
00:07:28.474 "tpoint_mask": "0x0"
00:07:28.474 },
00:07:28.474 "ftl": {
00:07:28.474 "mask": "0x40",
00:07:28.474 "tpoint_mask": "0x0"
00:07:28.474 },
00:07:28.474 "blobfs": {
00:07:28.474 "mask": "0x80",
00:07:28.474 "tpoint_mask": "0x0"
00:07:28.474 },
00:07:28.474 "dsa": {
00:07:28.474 "mask": "0x200",
00:07:28.474 "tpoint_mask": "0x0"
00:07:28.474 },
00:07:28.474 "thread": {
00:07:28.474 "mask": "0x400",
00:07:28.474 "tpoint_mask": "0x0"
00:07:28.474 },
00:07:28.474 "nvme_pcie": {
00:07:28.474 "mask": "0x800",
00:07:28.474 "tpoint_mask": "0x0"
00:07:28.474 },
00:07:28.474 "iaa": {
00:07:28.474 "mask": "0x1000",
00:07:28.474 "tpoint_mask": "0x0"
00:07:28.474 },
00:07:28.474 "nvme_tcp": {
00:07:28.474 "mask": "0x2000",
00:07:28.474 "tpoint_mask": "0x0"
00:07:28.474 },
00:07:28.474 "bdev_nvme": {
00:07:28.474 "mask": "0x4000",
00:07:28.474 "tpoint_mask": "0x0"
00:07:28.474 }
00:07:28.474 }'
00:07:28.474 05:59:59 -- rpc/rpc.sh@43 -- # jq length
00:07:28.474 05:59:59 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']'
00:07:28.474 05:59:59 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:07:28.474 05:59:59 -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:07:28.474 05:59:59 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:07:28.733 05:59:59 -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:07:28.733 05:59:59 -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:07:28.733 05:59:59 -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:07:28.733 05:59:59 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:07:28.733 05:59:59 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:07:28.733
00:07:28.733 real 0m0.231s
00:07:28.733 user 0m0.189s
00:07:28.733 sys 0m0.035s
00:07:28.733 05:59:59 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:28.733 05:59:59 -- common/autotest_common.sh@10 -- # set +x
00:07:28.733 ************************************
00:07:28.733 END TEST rpc_trace_cmd_test
00:07:28.733 ************************************
00:07:28.734 05:59:59 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:07:28.734 05:59:59 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:07:28.734 05:59:59 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:07:28.734 05:59:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:07:28.734 05:59:59 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:07:28.734 05:59:59 -- common/autotest_common.sh@10 -- # set +x
00:07:28.734 ************************************
00:07:28.734 START TEST rpc_daemon_integrity
00:07:28.734 ************************************
00:07:28.734 05:59:59 -- common/autotest_common.sh@1104 -- # rpc_integrity
00:07:28.734 05:59:59 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:07:28.734 05:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:28.734 05:59:59 -- common/autotest_common.sh@10 -- # set +x
00:07:28.734 05:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:28.734 05:59:59 -- rpc/rpc.sh@12 -- # bdevs='[]'
00:07:28.734 05:59:59 -- rpc/rpc.sh@13 -- # jq length
00:07:28.734 05:59:59 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:07:28.734 05:59:59 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:07:28.734 05:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:28.734 05:59:59 -- common/autotest_common.sh@10 -- # set +x
00:07:28.734 05:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:28.734 05:59:59 -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:07:28.993 05:59:59 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:07:28.993 05:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:28.993 05:59:59 -- common/autotest_common.sh@10 -- # set +x
00:07:28.993 05:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:28.993 05:59:59 -- rpc/rpc.sh@16 -- # bdevs='[
00:07:28.993 {
00:07:28.993 "name": "Malloc2",
00:07:28.993 "aliases": [
00:07:28.993 "d317f729-6984-4a89-b99d-412421b4afec"
00:07:28.993 ],
00:07:28.993 "product_name": "Malloc disk",
00:07:28.993 "block_size": 512,
00:07:28.993 "num_blocks": 16384,
00:07:28.993 "uuid": "d317f729-6984-4a89-b99d-412421b4afec",
00:07:28.993 "assigned_rate_limits": {
00:07:28.993 "rw_ios_per_sec": 0,
00:07:28.993 "rw_mbytes_per_sec": 0,
00:07:28.993 "r_mbytes_per_sec": 0,
00:07:28.993 "w_mbytes_per_sec": 0
00:07:28.993 },
00:07:28.993 "claimed": false,
00:07:28.993 "zoned": false,
00:07:28.993 "supported_io_types": {
00:07:28.993 "read": true,
00:07:28.993 "write": true,
00:07:28.993 "unmap": true,
00:07:28.993 "write_zeroes": true,
00:07:28.993 "flush": true,
00:07:28.993 "reset": true,
00:07:28.993 "compare": false,
00:07:28.993 "compare_and_write": false,
00:07:28.993 "abort": true,
00:07:28.993 "nvme_admin": false,
00:07:28.993 "nvme_io": false
00:07:28.993 },
00:07:28.993 "memory_domains": [
00:07:28.993 {
00:07:28.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:28.993 "dma_device_type": 2
00:07:28.993 }
00:07:28.993 ],
00:07:28.993 "driver_specific": {}
00:07:28.993 }
00:07:28.993 ]'
00:07:28.993 05:59:59 -- rpc/rpc.sh@17 -- # jq length
00:07:28.993 05:59:59 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:07:28.993 05:59:59 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:07:28.993 05:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:28.993 05:59:59 -- common/autotest_common.sh@10 -- # set +x
00:07:28.993 [2024-06-11 05:59:59.449899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:07:28.993 [2024-06-11 05:59:59.449978] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:28.993 [2024-06-11 05:59:59.450020] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
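The trace_get_info output a little earlier reflects the -e bdev flag spdk_tgt was started with: group mask 0x8, with every bdev tracepoint lit (0xffffffffffffffff). Roughly the same state can be reached and inspected at runtime; a sketch (the runtime RPC name and the spdk_trace binary path vary across SPDK versions, so both are assumptions here):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py trace_enable_tpoint_group bdev       # runtime equivalent of -e bdev
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py trace_get_info | jq -r .bdev.tpoint_mask   # expect 0xffffffffffffffff
    # offline analysis of the shared-memory file named in tpoint_shm_path, per the app's own hint:
    spdk_trace -s spdk_tgt -p 102933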
[2024-06-11 05:59:59.450043] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:28.993 [2024-06-11 05:59:59.452720] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:28.993 [2024-06-11 05:59:59.452789] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:07:28.993 Passthru0
00:07:28.993 05:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:28.993 05:59:59 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:07:28.993 05:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:28.993 05:59:59 -- common/autotest_common.sh@10 -- # set +x
00:07:28.993 05:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:28.993 05:59:59 -- rpc/rpc.sh@20 -- # bdevs='[
00:07:28.993 {
00:07:28.993 "name": "Malloc2",
00:07:28.993 "aliases": [
00:07:28.993 "d317f729-6984-4a89-b99d-412421b4afec"
00:07:28.993 ],
00:07:28.993 "product_name": "Malloc disk",
00:07:28.993 "block_size": 512,
00:07:28.993 "num_blocks": 16384,
00:07:28.993 "uuid": "d317f729-6984-4a89-b99d-412421b4afec",
00:07:28.993 "assigned_rate_limits": {
00:07:28.993 "rw_ios_per_sec": 0,
00:07:28.993 "rw_mbytes_per_sec": 0,
00:07:28.993 "r_mbytes_per_sec": 0,
00:07:28.993 "w_mbytes_per_sec": 0
00:07:28.993 },
00:07:28.993 "claimed": true,
00:07:28.993 "claim_type": "exclusive_write",
00:07:28.993 "zoned": false,
00:07:28.993 "supported_io_types": {
00:07:28.993 "read": true,
00:07:28.993 "write": true,
00:07:28.993 "unmap": true,
00:07:28.993 "write_zeroes": true,
00:07:28.993 "flush": true,
00:07:28.993 "reset": true,
00:07:28.993 "compare": false,
00:07:28.993 "compare_and_write": false,
00:07:28.993 "abort": true,
00:07:28.993 "nvme_admin": false,
00:07:28.993 "nvme_io": false
00:07:28.993 },
00:07:28.993 "memory_domains": [
00:07:28.993 {
00:07:28.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:28.993 "dma_device_type": 2
00:07:28.993 }
00:07:28.993 ],
00:07:28.993 "driver_specific": {}
00:07:28.993 },
00:07:28.993 {
00:07:28.993 "name": "Passthru0",
00:07:28.993 "aliases": [
00:07:28.993 "183c8a53-717a-562f-a5fa-6fcaf2cfd0c9"
00:07:28.993 ],
00:07:28.993 "product_name": "passthru",
00:07:28.993 "block_size": 512,
00:07:28.993 "num_blocks": 16384,
00:07:28.993 "uuid": "183c8a53-717a-562f-a5fa-6fcaf2cfd0c9",
00:07:28.993 "assigned_rate_limits": {
00:07:28.993 "rw_ios_per_sec": 0,
00:07:28.993 "rw_mbytes_per_sec": 0,
00:07:28.993 "r_mbytes_per_sec": 0,
00:07:28.993 "w_mbytes_per_sec": 0
00:07:28.993 },
00:07:28.993 "claimed": false,
00:07:28.993 "zoned": false,
00:07:28.993 "supported_io_types": {
00:07:28.993 "read": true,
00:07:28.993 "write": true,
00:07:28.993 "unmap": true,
00:07:28.993 "write_zeroes": true,
00:07:28.993 "flush": true,
00:07:28.993 "reset": true,
00:07:28.993 "compare": false,
00:07:28.993 "compare_and_write": false,
00:07:28.993 "abort": true,
00:07:28.993 "nvme_admin": false,
00:07:28.993 "nvme_io": false
00:07:28.993 },
00:07:28.993 "memory_domains": [
00:07:28.993 {
00:07:28.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:28.993 "dma_device_type": 2
00:07:28.993 }
00:07:28.993 ],
00:07:28.993 "driver_specific": {
00:07:28.993 "passthru": {
00:07:28.993 "name": "Passthru0",
00:07:28.993 "base_bdev_name": "Malloc2"
00:07:28.993 }
00:07:28.993 }
00:07:28.993 }
00:07:28.993 ]'
00:07:28.993 05:59:59 -- rpc/rpc.sh@21 -- # jq length
00:07:28.993 05:59:59 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:07:28.993 05:59:59 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:07:28.993 05:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:28.993 05:59:59 -- common/autotest_common.sh@10 -- # set +x
00:07:28.993 05:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:28.993 05:59:59 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:07:28.993 05:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:28.993 05:59:59 -- common/autotest_common.sh@10 -- # set +x
00:07:28.993 05:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:28.993 05:59:59 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:07:28.993 05:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:07:28.993 05:59:59 -- common/autotest_common.sh@10 -- # set +x
00:07:28.993 05:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:07:28.993 05:59:59 -- rpc/rpc.sh@25 -- # bdevs='[]'
00:07:28.993 05:59:59 -- rpc/rpc.sh@26 -- # jq length
00:07:28.993 05:59:59 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:07:28.993
00:07:28.993 real 0m0.312s
00:07:28.993 user 0m0.160s
00:07:28.993 sys 0m0.042s
00:07:28.993 05:59:59 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:28.993 05:59:59 -- common/autotest_common.sh@10 -- # set +x
00:07:28.993 ************************************
00:07:28.993 END TEST rpc_daemon_integrity
00:07:28.993 ************************************
00:07:29.252 05:59:59 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:07:29.252 05:59:59 -- rpc/rpc.sh@84 -- # killprocess 102933
00:07:29.252 05:59:59 -- common/autotest_common.sh@926 -- # '[' -z 102933 ']'
00:07:29.252 05:59:59 -- common/autotest_common.sh@930 -- # kill -0 102933
00:07:29.252 05:59:59 -- common/autotest_common.sh@931 -- # uname
00:07:29.252 05:59:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:07:29.252 05:59:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 102933
00:07:29.252 05:59:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:07:29.252 05:59:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:07:29.252 05:59:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 102933' killing process with pid 102933
00:07:29.252 05:59:59 -- common/autotest_common.sh@945 -- # kill 102933
00:07:29.252 05:59:59 -- common/autotest_common.sh@950 -- # wait 102933
00:07:31.784
00:07:31.784 real 0m5.829s
00:07:31.784 user 0m6.490s
00:07:31.784 sys 0m0.983s
00:07:31.784 ************************************
00:07:31.784 END TEST rpc
00:07:31.784 ************************************
00:07:31.784 06:00:02 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:31.784 06:00:02 -- common/autotest_common.sh@10 -- # set +x
00:07:32.042 06:00:02 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:07:32.042 06:00:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:07:32.042 06:00:02 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:07:32.042 06:00:02 -- common/autotest_common.sh@10 -- # set +x
00:07:32.042 ************************************
00:07:32.042 START TEST rpc_client
00:07:32.042 ************************************
00:07:32.042 06:00:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:07:32.042 * Looking for test storage...
00:07:32.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client
00:07:32.042 06:00:02 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
00:07:32.042 OK
00:07:32.042 06:00:02 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:07:32.042
00:07:32.042 real 0m0.195s
00:07:32.042 user 0m0.100s
00:07:32.042 sys 0m0.109s
00:07:32.042 06:00:02 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:32.042 06:00:02 -- common/autotest_common.sh@10 -- # set +x
00:07:32.042 ************************************
00:07:32.042 END TEST rpc_client
00:07:32.042 ************************************
00:07:32.303 06:00:02 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:07:32.303 06:00:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:07:32.303 06:00:02 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:07:32.303 06:00:02 -- common/autotest_common.sh@10 -- # set +x
00:07:32.303 ************************************
00:07:32.303 START TEST json_config
00:07:32.303 ************************************
00:07:32.303 06:00:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:07:32.303 06:00:02 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:07:32.303 06:00:02 -- nvmf/common.sh@7 -- # uname -s
00:07:32.303 06:00:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:32.303 06:00:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:32.303 06:00:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:32.303 06:00:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:32.303 06:00:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:32.303 06:00:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:32.303 06:00:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:32.303 06:00:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:32.303 06:00:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:32.303 06:00:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:32.303 06:00:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9a1cfe4-b507-4378-8d43-b1c0ea20dbf3
00:07:32.303 06:00:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9a1cfe4-b507-4378-8d43-b1c0ea20dbf3
00:07:32.303 06:00:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:32.303 06:00:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:32.303 06:00:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:07:32.303 06:00:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:07:32.303 06:00:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:32.303 06:00:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:32.303 06:00:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:32.303 06:00:02 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:07:32.303 06:00:02 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:07:32.303 06:00:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:07:32.303 06:00:02 -- paths/export.sh@5 -- # export PATH
00:07:32.303 06:00:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:07:32.303 06:00:02 -- nvmf/common.sh@46 -- # : 0
00:07:32.303 06:00:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:07:32.303 06:00:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:07:32.303 06:00:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:07:32.303 06:00:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:32.303 06:00:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:32.303 06:00:02 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:07:32.303 06:00:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:07:32.303 06:00:02 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:07:32.303 06:00:02 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]]
00:07:32.303 06:00:02 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]]
00:07:32.303 06:00:02 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]]
00:07:32.303 06:00:02 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:07:32.303 06:00:02 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='')
00:07:32.303 06:00:02 -- json_config/json_config.sh@30 -- # declare -A app_pid
00:07:32.303 06:00:02 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:07:32.303 06:00:02 -- json_config/json_config.sh@31 -- # declare -A app_socket
00:07:32.303 06:00:02 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:07:32.303 06:00:02 -- json_config/json_config.sh@32 -- # declare -A app_params
00:07:32.303 06:00:02 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json')
00:07:32.303 06:00:02 -- json_config/json_config.sh@33 -- # declare -A configs_path
00:07:32.303 06:00:02 -- json_config/json_config.sh@43 -- # last_event_id=0
00:07:32.303 06:00:02 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:07:32.303 INFO: JSON configuration test init
00:07:32.303 06:00:02 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init'
00:07:32.303 06:00:02 -- json_config/json_config.sh@420 -- # json_config_test_init
00:07:32.303 06:00:02 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init
00:07:32.303 06:00:02 -- common/autotest_common.sh@712 -- # xtrace_disable
00:07:32.303 06:00:02 -- common/autotest_common.sh@10 -- # set +x
00:07:32.303 06:00:02 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target
00:07:32.303 06:00:02 -- common/autotest_common.sh@712 -- # xtrace_disable
00:07:32.303 06:00:02 -- common/autotest_common.sh@10 -- # set +x
00:07:32.304 06:00:02 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc
00:07:32.304 06:00:02 -- json_config/json_config.sh@98 -- # local app=target
00:07:32.304 06:00:02 -- json_config/json_config.sh@99 -- # shift
00:07:32.304 06:00:02 -- json_config/json_config.sh@101 -- # [[ -n 22 ]]
00:07:32.304 06:00:02 -- json_config/json_config.sh@102 -- # [[ -z '' ]]
00:07:32.304 06:00:02 -- json_config/json_config.sh@104 -- # local app_extra_params=
00:07:32.304 06:00:02 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]]
00:07:32.304 06:00:02 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]]
00:07:32.304 06:00:02 -- json_config/json_config.sh@111 -- # app_pid[$app]=103234
00:07:32.304 Waiting for target to run...
00:07:32.304 06:00:02 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...'
00:07:32.304 06:00:02 -- json_config/json_config.sh@114 -- # waitforlisten 103234 /var/tmp/spdk_tgt.sock
00:07:32.304 06:00:02 -- common/autotest_common.sh@819 -- # '[' -z 103234 ']'
00:07:32.304 06:00:02 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:07:32.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:07:32.304 06:00:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:07:32.304 06:00:02 -- common/autotest_common.sh@824 -- # local max_retries=100
00:07:32.304 06:00:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:07:32.304 06:00:02 -- common/autotest_common.sh@828 -- # xtrace_disable
00:07:32.304 06:00:02 -- common/autotest_common.sh@10 -- # set +x
00:07:32.304 [2024-06-11 06:00:02.914564] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:07:32.304 [2024-06-11 06:00:02.915729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103234 ]
00:07:33.238 [2024-06-11 06:00:03.535975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:33.238 [2024-06-11 06:00:03.818985] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:33.238 [2024-06-11 06:00:03.819285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:33.496 06:00:03 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:07:33.496
00:07:33.496 06:00:03 -- common/autotest_common.sh@852 -- # return 0
00:07:33.496 06:00:03 -- json_config/json_config.sh@115 -- # echo ''
00:07:33.496 06:00:03 -- json_config/json_config.sh@322 -- # create_accel_config
00:07:33.496 06:00:03 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config
00:07:33.496 06:00:03 -- common/autotest_common.sh@712 -- # xtrace_disable
00:07:33.496 06:00:03 -- common/autotest_common.sh@10 -- # set +x
00:07:33.496 06:00:03 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]]
00:07:33.496 06:00:03 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config
00:07:33.496 06:00:03 -- common/autotest_common.sh@718 -- # xtrace_disable
00:07:33.496 06:00:03 -- common/autotest_common.sh@10 -- # set +x
00:07:33.496 06:00:03 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:07:33.496 06:00:03 -- json_config/json_config.sh@327 -- # tgt_rpc load_config
00:07:33.496 06:00:03 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:07:34.436 06:00:05 -- json_config/json_config.sh@329 -- # tgt_check_notification_types
00:07:34.436 06:00:05 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types
00:07:34.436 06:00:05 -- common/autotest_common.sh@712 -- # xtrace_disable
00:07:34.436 06:00:05 -- common/autotest_common.sh@10 -- # set +x
00:07:34.436 06:00:05 -- json_config/json_config.sh@48 -- # local ret=0
00:07:34.436 06:00:05 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:07:34.436 06:00:05 -- json_config/json_config.sh@49 -- # local enabled_types
00:07:34.436 06:00:05 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types
00:07:34.436 06:00:05 -- json_config/json_config.sh@51 -- # jq -r '.[]'
00:07:34.436 06:00:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:07:34.694 06:00:05 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister')
00:07:34.694 06:00:05 -- json_config/json_config.sh@51 -- # local get_types
00:07:34.694 06:00:05 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]]
00:07:34.694 06:00:05 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types
00:07:34.694 06:00:05 -- common/autotest_common.sh@718 -- # xtrace_disable
00:07:34.694 06:00:05 -- common/autotest_common.sh@10 -- # set +x
00:07:34.977 06:00:05 -- json_config/json_config.sh@58 -- # return 0
00:07:34.977 06:00:05 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]]
00:07:34.977 06:00:05 -- json_config/json_config.sh@332 -- # create_bdev_subsystem_config
00:07:34.977 06:00:05 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config
00:07:34.977 06:00:05 -- common/autotest_common.sh@712 -- # xtrace_disable
00:07:34.977 06:00:05 -- common/autotest_common.sh@10 -- # set +x
00:07:34.977 06:00:05 -- json_config/json_config.sh@160 -- # expected_notifications=()
00:07:34.977 06:00:05 -- json_config/json_config.sh@160 -- # local expected_notifications
00:07:34.977 06:00:05 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications))
00:07:34.977 06:00:05 -- json_config/json_config.sh@164 -- # get_notifications
00:07:34.977 06:00:05 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id
00:07:34.977 06:00:05 -- json_config/json_config.sh@64 -- # IFS=:
00:07:34.977 06:00:05 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:34.977 06:00:05 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"'
00:07:34.977 06:00:05 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0
00:07:34.977 06:00:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0
00:07:35.236 06:00:05 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1
00:07:35.236 06:00:05 -- json_config/json_config.sh@64 -- # IFS=:
00:07:35.236 06:00:05 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:35.236 06:00:05 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]]
00:07:35.236 06:00:05 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1
00:07:35.236 06:00:05 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2
00:07:35.236 06:00:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2
00:07:35.494 Nvme0n1p0 Nvme0n1p1
00:07:35.494 06:00:05 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3
00:07:35.494 06:00:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3
00:07:35.753 [2024-06-11 06:00:06.264241] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:07:35.753 [2024-06-11 06:00:06.264367] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:07:35.753
00:07:35.753 06:00:06 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3
00:07:35.753 06:00:06 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3
00:07:36.011 Malloc3
00:07:36.011 06:00:06 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
00:07:36.011 06:00:06 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
00:07:36.268 [2024-06-11 06:00:06.851803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:07:36.268 [2024-06-11 06:00:06.851948] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:36.268 [2024-06-11 06:00:06.851996] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:07:36.268 [2024-06-11 06:00:06.852040] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:36.268 [2024-06-11 06:00:06.855826] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:36.268 [2024-06-11 06:00:06.855895] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3
00:07:36.268 PTBdevFromMalloc3
00:07:36.268 06:00:06 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512
00:07:36.268 06:00:06 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512
00:07:36.526 Null0
00:07:36.526 06:00:07 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0
00:07:36.526 06:00:07 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0
00:07:37.092 Malloc0
00:07:37.092 06:00:07 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1
00:07:37.092 06:00:07 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1
00:07:37.350 Malloc1
00:07:37.350 06:00:07 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1)
00:07:37.350 06:00:07 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400
00:07:37.917 102400+0 records in
00:07:37.917 102400+0 records out
00:07:37.917 104857600 bytes (105 MB, 100 MiB) copied, 0.568336 s, 184 MB/s
00:07:37.917 06:00:08 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024
00:07:37.917 06:00:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024
00:07:38.174 aio_disk
00:07:38.174 06:00:08 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk)
00:07:38.174 06:00:08 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
00:07:38.174 06:00:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
00:07:38.432 73e5d9a3-e01b-4286-a0d2-cfecdb220901
00:07:38.432 06:00:08 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)")
00:07:38.432 06:00:08 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32
00:07:38.432 06:00:08 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32
00:07:38.688 06:00:09 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32
00:07:38.688 06:00:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32
00:07:39.253 06:00:09 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0
00:07:39.253 06:00:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0
00:07:39.253 06:00:09 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0
00:07:39.253 06:00:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0
00:07:39.511 06:00:10 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]]
00:07:39.511 06:00:10 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]]
00:07:39.511 06:00:10 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:2451f292-5a6f-43fb-b99e-15b740d0f23f bdev_register:5d53aa20-4c3c-462e-998a-7ab20d8ddbc7 bdev_register:32046ffc-ba63-4b87-b557-ebd65943a3f5 bdev_register:f2b7bffa-75bc-4563-a294-b5655850862b
00:07:39.511 06:00:10 -- json_config/json_config.sh@70 -- # local events_to_check
00:07:39.511 06:00:10 -- json_config/json_config.sh@71 -- # local recorded_events
00:07:39.511 06:00:10 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort))
00:07:39.511 06:00:10 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:2451f292-5a6f-43fb-b99e-15b740d0f23f bdev_register:5d53aa20-4c3c-462e-998a-7ab20d8ddbc7 bdev_register:32046ffc-ba63-4b87-b557-ebd65943a3f5 bdev_register:f2b7bffa-75bc-4563-a294-b5655850862b
00:07:39.511 06:00:10 -- json_config/json_config.sh@74 -- # sort
00:07:39.511 06:00:10 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort))
00:07:39.511 06:00:10 -- json_config/json_config.sh@75 -- # sort
00:07:39.511 06:00:10 -- json_config/json_config.sh@75 -- # get_notifications
00:07:39.511 06:00:10 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id
00:07:39.511 06:00:10 -- json_config/json_config.sh@64 -- # IFS=:
00:07:39.511 06:00:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:39.511 06:00:10 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"'
00:07:39.511 06:00:10 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0
00:07:39.511 06:00:10 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0
00:07:39.783 06:00:10 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # IFS=:
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:39.783 06:00:10 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # IFS=:
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:39.783 06:00:10 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p0
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # IFS=:
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:39.783 06:00:10 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # IFS=:
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:39.783 06:00:10 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # IFS=:
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:39.783 06:00:10 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # IFS=:
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:39.783 06:00:10 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # IFS=:
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:39.783 06:00:10 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # IFS=:
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:39.783 06:00:10 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # IFS=:
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:39.783 06:00:10 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # IFS=:
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:39.783 06:00:10 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # IFS=:
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:39.783 06:00:10 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # IFS=:
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:39.783 06:00:10 -- json_config/json_config.sh@65 -- # echo bdev_register:2451f292-5a6f-43fb-b99e-15b740d0f23f
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # IFS=:
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:39.783 06:00:10 -- json_config/json_config.sh@65 -- # echo bdev_register:5d53aa20-4c3c-462e-998a-7ab20d8ddbc7
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # IFS=:
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:39.783 06:00:10 -- json_config/json_config.sh@65 -- # echo bdev_register:32046ffc-ba63-4b87-b557-ebd65943a3f5
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # IFS=:
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:39.783 06:00:10 -- json_config/json_config.sh@65 -- # echo bdev_register:f2b7bffa-75bc-4563-a294-b5655850862b
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # IFS=:
00:07:39.783 06:00:10 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id
00:07:40.047 06:00:10 -- json_config/json_config.sh@77
-- # [[ bdev_register:2451f292-5a6f-43fb-b99e-15b740d0f23f bdev_register:32046ffc-ba63-4b87-b557-ebd65943a3f5 bdev_register:5d53aa20-4c3c-462e-998a-7ab20d8ddbc7 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:f2b7bffa-75bc-4563-a294-b5655850862b != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\2\4\5\1\f\2\9\2\-\5\a\6\f\-\4\3\f\b\-\b\9\9\e\-\1\5\b\7\4\0\d\0\f\2\3\f\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\3\2\0\4\6\f\f\c\-\b\a\6\3\-\4\b\8\7\-\b\5\5\7\-\e\b\d\6\5\9\4\3\a\3\f\5\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\5\d\5\3\a\a\2\0\-\4\c\3\c\-\4\6\2\e\-\9\9\8\a\-\7\a\b\2\0\d\8\d\d\b\c\7\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\f\2\b\7\b\f\f\a\-\7\5\b\c\-\4\5\6\3\-\a\2\9\4\-\b\5\6\5\5\8\5\0\8\6\2\b ]] 00:07:40.047 06:00:10 -- json_config/json_config.sh@89 -- # cat 00:07:40.047 06:00:10 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:2451f292-5a6f-43fb-b99e-15b740d0f23f bdev_register:32046ffc-ba63-4b87-b557-ebd65943a3f5 bdev_register:5d53aa20-4c3c-462e-998a-7ab20d8ddbc7 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:f2b7bffa-75bc-4563-a294-b5655850862b 00:07:40.047 Expected events matched: 00:07:40.047 bdev_register:2451f292-5a6f-43fb-b99e-15b740d0f23f 00:07:40.047 bdev_register:32046ffc-ba63-4b87-b557-ebd65943a3f5 00:07:40.047 bdev_register:5d53aa20-4c3c-462e-998a-7ab20d8ddbc7 00:07:40.047 bdev_register:Malloc0 00:07:40.047 bdev_register:Malloc0p0 00:07:40.047 bdev_register:Malloc0p1 00:07:40.047 bdev_register:Malloc0p2 00:07:40.047 bdev_register:Malloc1 00:07:40.047 bdev_register:Malloc3 00:07:40.047 bdev_register:Null0 00:07:40.047 bdev_register:Nvme0n1 00:07:40.047 bdev_register:Nvme0n1p0 00:07:40.047 bdev_register:Nvme0n1p1 00:07:40.047 bdev_register:PTBdevFromMalloc3 00:07:40.047 bdev_register:aio_disk 00:07:40.047 bdev_register:f2b7bffa-75bc-4563-a294-b5655850862b 00:07:40.047 06:00:10 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:07:40.047 06:00:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:40.047 06:00:10 -- common/autotest_common.sh@10 -- # set +x 00:07:40.047 06:00:10 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:07:40.047 06:00:10 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:07:40.047 06:00:10 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:07:40.047 06:00:10 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:07:40.047 06:00:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:40.047 06:00:10 -- common/autotest_common.sh@10 -- # set +x 00:07:40.047 
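The check that just ran, tgt_check_notifications, boils down to: collect every bdev_register:* notification from the target, sort both the expected and the recorded lists, and compare them wholesale (the long backslash-escaped string above is just bash xtrace rendering the literal side of that comparison). A simplified, hedged rendering of the idiom — get_notifications stands in for the script's wrapper that pipes "rpc.py notify_get_notifications -i 0" through jq -r '.[] | "\(.type):\(.ctx):\(.id)"' as shown in the trace:

    tgt_check_notifications() {
        local expected recorded
        expected=($(printf '%s\n' "$@" | sort))
        recorded=($(get_notifications | sort))
        # string-join both arrays and demand an exact, order-sensitive match
        [[ "${expected[*]}" == "${recorded[*]}" ]]
    }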
06:00:10 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:07:40.047 06:00:10 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:40.047 06:00:10 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:40.305 MallocBdevForConfigChangeCheck 00:07:40.305 06:00:10 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:07:40.305 06:00:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:40.305 06:00:10 -- common/autotest_common.sh@10 -- # set +x 00:07:40.305 06:00:10 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:07:40.305 06:00:10 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:40.872 INFO: shutting down applications... 00:07:40.872 06:00:11 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:07:40.872 06:00:11 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:07:40.872 06:00:11 -- json_config/json_config.sh@431 -- # json_config_clear target 00:07:40.872 06:00:11 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:07:40.872 06:00:11 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:40.872 [2024-06-11 06:00:11.464297] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:07:41.128 Calling clear_vhost_scsi_subsystem 00:07:41.128 Calling clear_iscsi_subsystem 00:07:41.128 Calling clear_vhost_blk_subsystem 00:07:41.128 Calling clear_nbd_subsystem 00:07:41.128 Calling clear_nvmf_subsystem 00:07:41.128 Calling clear_bdev_subsystem 00:07:41.128 Calling clear_accel_subsystem 00:07:41.128 Calling clear_iobuf_subsystem 00:07:41.128 Calling clear_sock_subsystem 00:07:41.128 Calling clear_vmd_subsystem 00:07:41.129 Calling clear_scheduler_subsystem 00:07:41.129 06:00:11 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:07:41.129 06:00:11 -- json_config/json_config.sh@396 -- # count=100 00:07:41.129 06:00:11 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:07:41.129 06:00:11 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:41.129 06:00:11 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:41.129 06:00:11 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:07:41.695 06:00:12 -- json_config/json_config.sh@398 -- # break 00:07:41.695 06:00:12 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:07:41.695 06:00:12 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:07:41.695 06:00:12 -- json_config/json_config.sh@120 -- # local app=target 00:07:41.695 06:00:12 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:07:41.695 06:00:12 -- json_config/json_config.sh@124 -- # [[ -n 103234 ]] 00:07:41.695 06:00:12 -- json_config/json_config.sh@127 -- # kill -SIGINT 103234 00:07:41.695 06:00:12 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:07:41.695 06:00:12 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:07:41.695 06:00:12 -- 
json_config/json_config.sh@130 -- # kill -0 103234 00:07:41.695 06:00:12 -- json_config/json_config.sh@134 -- # sleep 0.5 00:07:42.262 06:00:12 -- json_config/json_config.sh@129 -- # (( i++ )) 00:07:42.262 06:00:12 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:07:42.262 06:00:12 -- json_config/json_config.sh@130 -- # kill -0 103234 00:07:42.262 06:00:12 -- json_config/json_config.sh@134 -- # sleep 0.5 00:07:42.520 06:00:13 -- json_config/json_config.sh@129 -- # (( i++ )) 00:07:42.520 06:00:13 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:07:42.520 06:00:13 -- json_config/json_config.sh@130 -- # kill -0 103234 00:07:42.520 06:00:13 -- json_config/json_config.sh@134 -- # sleep 0.5 00:07:43.087 06:00:13 -- json_config/json_config.sh@129 -- # (( i++ )) 00:07:43.087 06:00:13 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:07:43.087 06:00:13 -- json_config/json_config.sh@130 -- # kill -0 103234 00:07:43.087 06:00:13 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:07:43.087 06:00:13 -- json_config/json_config.sh@132 -- # break 00:07:43.087 06:00:13 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:07:43.087 SPDK target shutdown done 00:07:43.087 06:00:13 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:07:43.087 INFO: relaunching applications... 00:07:43.087 06:00:13 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:07:43.087 06:00:13 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:43.087 06:00:13 -- json_config/json_config.sh@98 -- # local app=target 00:07:43.087 06:00:13 -- json_config/json_config.sh@99 -- # shift 00:07:43.087 06:00:13 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:07:43.087 06:00:13 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:07:43.087 06:00:13 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:07:43.087 06:00:13 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:43.087 06:00:13 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:43.087 06:00:13 -- json_config/json_config.sh@111 -- # app_pid[$app]=103529 00:07:43.087 Waiting for target to run... 00:07:43.087 06:00:13 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:07:43.087 06:00:13 -- json_config/json_config.sh@114 -- # waitforlisten 103529 /var/tmp/spdk_tgt.sock 00:07:43.087 06:00:13 -- common/autotest_common.sh@819 -- # '[' -z 103529 ']' 00:07:43.087 06:00:13 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:43.087 06:00:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:43.087 06:00:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:43.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:43.087 06:00:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:43.087 06:00:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:43.087 06:00:13 -- common/autotest_common.sh@10 -- # set +x 00:07:43.087 [2024-06-11 06:00:13.727119] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
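The shutdown sequence above is a SIGINT-then-poll idiom: send kill -SIGINT to the target, then loop up to 30 times checking liveness with kill -0 (which delivers no signal, only an existence check) and sleeping 0.5 s between tries, giving roughly a 15-second budget before 'SPDK target shutdown done'. A minimal standalone sketch of that pattern (the function name is illustrative; the in-tree version also clears app_pid on success):

    #!/usr/bin/env bash
    wait_for_exit() {
        local pid=$1
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            kill -0 "$pid" 2>/dev/null || return 0   # gone: clean shutdown
            sleep 0.5
        done
        return 1   # still alive after ~15 s; caller may escalate
    }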
00:07:43.087 [2024-06-11 06:00:13.727382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103529 ] 00:07:44.048 [2024-06-11 06:00:14.327013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.048 [2024-06-11 06:00:14.610461] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:44.048 [2024-06-11 06:00:14.610780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.982 [2024-06-11 06:00:15.447333] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:07:44.982 [2024-06-11 06:00:15.447482] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:07:44.982 [2024-06-11 06:00:15.455308] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:44.982 [2024-06-11 06:00:15.455394] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:44.982 [2024-06-11 06:00:15.463323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:44.982 [2024-06-11 06:00:15.463427] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:44.982 [2024-06-11 06:00:15.463469] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:44.982 [2024-06-11 06:00:15.556167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:44.982 [2024-06-11 06:00:15.556311] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.982 [2024-06-11 06:00:15.556360] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:44.982 [2024-06-11 06:00:15.556395] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.982 [2024-06-11 06:00:15.557094] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.982 [2024-06-11 06:00:15.557143] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:07:45.916 06:00:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:45.916 06:00:16 -- common/autotest_common.sh@852 -- # return 0 00:07:45.916 00:07:45.916 06:00:16 -- json_config/json_config.sh@115 -- # echo '' 00:07:45.916 06:00:16 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:07:45.916 INFO: Checking if target configuration is the same... 00:07:45.916 06:00:16 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:45.916 06:00:16 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:45.916 06:00:16 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:07:45.916 06:00:16 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:45.916 + '[' 2 -ne 2 ']' 00:07:45.916 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:45.916 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:07:45.916 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:45.916 +++ basename /dev/fd/62 00:07:45.916 ++ mktemp /tmp/62.XXX 00:07:45.916 + tmp_file_1=/tmp/62.o0k 00:07:45.916 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:45.916 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:45.916 + tmp_file_2=/tmp/spdk_tgt_config.json.60s 00:07:45.916 + ret=0 00:07:45.916 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:46.174 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:46.174 + diff -u /tmp/62.o0k /tmp/spdk_tgt_config.json.60s 00:07:46.174 INFO: JSON config files are the same 00:07:46.174 + echo 'INFO: JSON config files are the same' 00:07:46.174 + rm /tmp/62.o0k /tmp/spdk_tgt_config.json.60s 00:07:46.174 + exit 0 00:07:46.174 06:00:16 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:07:46.174 INFO: changing configuration and checking if this can be detected... 00:07:46.174 06:00:16 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:46.174 06:00:16 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:46.174 06:00:16 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:46.431 06:00:17 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:46.431 06:00:17 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:07:46.431 06:00:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:46.431 + '[' 2 -ne 2 ']' 00:07:46.431 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:46.431 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:46.431 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:46.431 +++ basename /dev/fd/62 00:07:46.431 ++ mktemp /tmp/62.XXX 00:07:46.431 + tmp_file_1=/tmp/62.fxr 00:07:46.431 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:46.431 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:46.431 + tmp_file_2=/tmp/spdk_tgt_config.json.0ID 00:07:46.431 + ret=0 00:07:46.431 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:46.997 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:46.997 + diff -u /tmp/62.fxr /tmp/spdk_tgt_config.json.0ID 00:07:46.997 + ret=1 00:07:46.997 + echo '=== Start of file: /tmp/62.fxr ===' 00:07:46.997 + cat /tmp/62.fxr 00:07:46.997 + echo '=== End of file: /tmp/62.fxr ===' 00:07:46.997 + echo '' 00:07:46.997 + echo '=== Start of file: /tmp/spdk_tgt_config.json.0ID ===' 00:07:46.997 + cat /tmp/spdk_tgt_config.json.0ID 00:07:46.997 + echo '=== End of file: /tmp/spdk_tgt_config.json.0ID ===' 00:07:46.997 + echo '' 00:07:46.997 + rm /tmp/62.fxr /tmp/spdk_tgt_config.json.0ID 00:07:46.997 + exit 1 00:07:46.997 INFO: configuration change detected. 00:07:46.997 06:00:17 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
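Both configuration checks in this phase ('Checking if target configuration is the same' and 'changing configuration and checking if this can be detected') use the same normalize-then-diff recipe: dump the live config with save_config, pass both the dump and the on-disk spdk_tgt_config.json through config_filter.py -method sort so ordering cannot cause false diffs, write each to a mktemp file, and diff -u them; exit 0 means identical, ret=1 means a change was detected (here provoked by deleting MallocBdevForConfigChangeCheck). A hedged sketch of the recipe, with paths abbreviated and the temp-file names illustrative:

    rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | config_filter.py -method sort > /tmp/live.json
    config_filter.py -method sort < spdk_tgt_config.json > /tmp/disk.json
    if diff -u /tmp/live.json /tmp/disk.json; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi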
00:07:46.997 06:00:17 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:07:46.997 06:00:17 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:07:46.997 06:00:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:46.998 06:00:17 -- common/autotest_common.sh@10 -- # set +x 00:07:46.998 06:00:17 -- json_config/json_config.sh@360 -- # local ret=0 00:07:46.998 06:00:17 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:07:46.998 06:00:17 -- json_config/json_config.sh@370 -- # [[ -n 103529 ]] 00:07:46.998 06:00:17 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:07:46.998 06:00:17 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:07:46.998 06:00:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:46.998 06:00:17 -- common/autotest_common.sh@10 -- # set +x 00:07:46.998 06:00:17 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:07:46.998 06:00:17 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:07:46.998 06:00:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:07:47.256 06:00:17 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:07:47.256 06:00:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:07:47.515 06:00:18 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:07:47.515 06:00:18 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:07:47.797 06:00:18 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:07:47.798 06:00:18 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:07:48.055 06:00:18 -- json_config/json_config.sh@246 -- # uname -s 00:07:48.055 06:00:18 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:07:48.055 06:00:18 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:07:48.055 06:00:18 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:07:48.055 06:00:18 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:07:48.055 06:00:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:48.056 06:00:18 -- common/autotest_common.sh@10 -- # set +x 00:07:48.056 06:00:18 -- json_config/json_config.sh@376 -- # killprocess 103529 00:07:48.056 06:00:18 -- common/autotest_common.sh@926 -- # '[' -z 103529 ']' 00:07:48.056 06:00:18 -- common/autotest_common.sh@930 -- # kill -0 103529 00:07:48.056 06:00:18 -- common/autotest_common.sh@931 -- # uname 00:07:48.056 06:00:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:48.056 06:00:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 103529 00:07:48.056 06:00:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:48.056 killing process with pid 103529 00:07:48.056 06:00:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:48.056 06:00:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 103529' 00:07:48.056 06:00:18 -- common/autotest_common.sh@945 -- # kill 103529 00:07:48.056 06:00:18 -- common/autotest_common.sh@950 -- # wait 103529 00:07:49.426 06:00:20 -- 
json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:49.426 06:00:20 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:07:49.426 06:00:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:49.426 06:00:20 -- common/autotest_common.sh@10 -- # set +x 00:07:49.426 06:00:20 -- json_config/json_config.sh@381 -- # return 0 00:07:49.426 INFO: Success 00:07:49.426 06:00:20 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:07:49.426 00:07:49.426 real 0m17.341s 00:07:49.426 user 0m23.955s 00:07:49.426 sys 0m4.041s 00:07:49.426 06:00:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.426 06:00:20 -- common/autotest_common.sh@10 -- # set +x 00:07:49.426 ************************************ 00:07:49.426 END TEST json_config 00:07:49.426 ************************************ 00:07:49.684 06:00:20 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:49.684 06:00:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:49.684 06:00:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:49.684 06:00:20 -- common/autotest_common.sh@10 -- # set +x 00:07:49.684 ************************************ 00:07:49.684 START TEST json_config_extra_key 00:07:49.684 ************************************ 00:07:49.684 06:00:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:49.684 06:00:20 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:49.684 06:00:20 -- nvmf/common.sh@7 -- # uname -s 00:07:49.684 06:00:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:49.684 06:00:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:49.684 06:00:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:49.684 06:00:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:49.684 06:00:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:49.684 06:00:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:49.684 06:00:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:49.684 06:00:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:49.684 06:00:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:49.684 06:00:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:49.684 06:00:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a6acfd99-8c91-43df-b757-c69e0396a5b2 00:07:49.684 06:00:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=a6acfd99-8c91-43df-b757-c69e0396a5b2 00:07:49.684 06:00:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:49.684 06:00:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:49.684 06:00:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:49.684 06:00:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:49.684 06:00:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.684 06:00:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.684 06:00:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.684 06:00:20 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:49.684 06:00:20 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:49.684 06:00:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:49.684 06:00:20 -- paths/export.sh@5 -- # export PATH 00:07:49.684 06:00:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:49.684 06:00:20 -- nvmf/common.sh@46 -- # : 0 00:07:49.684 06:00:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:49.684 06:00:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:49.684 06:00:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:49.684 06:00:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.684 06:00:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:49.684 06:00:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:49.684 06:00:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:49.684 06:00:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:49.684 06:00:20 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:07:49.684 06:00:20 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:07:49.684 06:00:20 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:49.684 06:00:20 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:07:49.684 06:00:20 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:49.684 06:00:20 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:07:49.684 06:00:20 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:49.684 06:00:20 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:07:49.684 06:00:20 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:49.684 INFO: launching applications... 00:07:49.684 06:00:20 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 
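The launch that follows starts spdk_tgt with the extra-key JSON config and then blocks in waitforlisten until the RPC socket answers. The tree's waitforlisten helper is more elaborate than this, but polling a cheap RPC such as spdk_get_version (which appears in the method list later in this log) is one simple stand-in for the same wait, sketched here under that assumption and with the binary path abbreviated:

    #!/usr/bin/env bash
    spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json test/json_config/extra_key.json &
    pid=$!
    # wait until the UNIX-domain RPC socket accepts requests
    until rpc.py -s /var/tmp/spdk_tgt.sock -t 1 spdk_get_version >/dev/null 2>&1; do
        kill -0 "$pid" 2>/dev/null || exit 1   # target died during startup
        sleep 0.2
    done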
00:07:49.684 06:00:20 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:49.685 06:00:20 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:07:49.685 06:00:20 -- json_config/json_config_extra_key.sh@25 -- # shift 00:07:49.685 06:00:20 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:07:49.685 06:00:20 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:07:49.685 06:00:20 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=103730 00:07:49.685 Waiting for target to run... 00:07:49.685 06:00:20 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:07:49.685 06:00:20 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 103730 /var/tmp/spdk_tgt.sock 00:07:49.685 06:00:20 -- common/autotest_common.sh@819 -- # '[' -z 103730 ']' 00:07:49.685 06:00:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:49.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:49.685 06:00:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:49.685 06:00:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:49.685 06:00:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:49.685 06:00:20 -- common/autotest_common.sh@10 -- # set +x 00:07:49.685 06:00:20 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:49.685 [2024-06-11 06:00:20.271969] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:07:49.685 [2024-06-11 06:00:20.272167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103730 ] 00:07:50.250 [2024-06-11 06:00:20.869321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.509 [2024-06-11 06:00:21.076104] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:50.509 [2024-06-11 06:00:21.076347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.444 06:00:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:51.444 00:07:51.444 06:00:21 -- common/autotest_common.sh@852 -- # return 0 00:07:51.444 06:00:21 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:07:51.444 INFO: shutting down applications... 00:07:51.444 06:00:21 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
00:07:51.444 06:00:21 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:07:51.444 06:00:21 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:07:51.444 06:00:21 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:07:51.444 06:00:21 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 103730 ]] 00:07:51.444 06:00:21 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 103730 00:07:51.444 06:00:21 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:07:51.444 06:00:21 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:51.444 06:00:21 -- json_config/json_config_extra_key.sh@50 -- # kill -0 103730 00:07:51.444 06:00:21 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:52.010 06:00:22 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:52.010 06:00:22 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:52.010 06:00:22 -- json_config/json_config_extra_key.sh@50 -- # kill -0 103730 00:07:52.010 06:00:22 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:52.289 06:00:22 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:52.289 06:00:22 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:52.289 06:00:22 -- json_config/json_config_extra_key.sh@50 -- # kill -0 103730 00:07:52.289 06:00:22 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:52.855 06:00:23 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:52.855 06:00:23 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:52.855 06:00:23 -- json_config/json_config_extra_key.sh@50 -- # kill -0 103730 00:07:52.855 06:00:23 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:53.420 06:00:23 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:53.420 06:00:23 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:53.420 06:00:23 -- json_config/json_config_extra_key.sh@50 -- # kill -0 103730 00:07:53.420 06:00:23 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:53.985 06:00:24 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:53.985 06:00:24 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:53.985 06:00:24 -- json_config/json_config_extra_key.sh@50 -- # kill -0 103730 00:07:53.985 06:00:24 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:54.550 06:00:24 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:54.550 06:00:24 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:54.550 06:00:24 -- json_config/json_config_extra_key.sh@50 -- # kill -0 103730 00:07:54.550 06:00:24 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:54.808 06:00:25 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:54.808 06:00:25 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:54.808 06:00:25 -- json_config/json_config_extra_key.sh@50 -- # kill -0 103730 00:07:54.808 06:00:25 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:07:54.808 SPDK target shutdown done 00:07:54.808 06:00:25 -- json_config/json_config_extra_key.sh@52 -- # break 00:07:54.808 06:00:25 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:07:54.808 06:00:25 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:07:54.808 Success 00:07:54.808 06:00:25 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:07:54.808 00:07:54.808 real 0m5.309s 00:07:54.808 user 0m5.032s 
00:07:54.808 sys 0m0.735s 00:07:54.808 06:00:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.808 06:00:25 -- common/autotest_common.sh@10 -- # set +x 00:07:54.808 ************************************ 00:07:54.808 END TEST json_config_extra_key 00:07:54.808 ************************************ 00:07:54.808 06:00:25 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:54.808 06:00:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:54.808 06:00:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:54.808 06:00:25 -- common/autotest_common.sh@10 -- # set +x 00:07:55.066 ************************************ 00:07:55.066 START TEST alias_rpc 00:07:55.066 ************************************ 00:07:55.066 06:00:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:55.066 * Looking for test storage... 00:07:55.066 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:55.066 06:00:25 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:55.066 06:00:25 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=103864 00:07:55.066 06:00:25 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 103864 00:07:55.066 06:00:25 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:55.066 06:00:25 -- common/autotest_common.sh@819 -- # '[' -z 103864 ']' 00:07:55.066 06:00:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.066 06:00:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:55.066 06:00:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.066 06:00:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:55.066 06:00:25 -- common/autotest_common.sh@10 -- # set +x 00:07:55.066 [2024-06-11 06:00:25.657477] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
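Once this target is up, the alias test's core action (visible just below) is a single rpc.py load_config -i against the running instance: load_config reads a JSON configuration from stdin and replays it as individual RPC calls, which is how deprecated method aliases get exercised against the live target. An illustrative round trip against an already-running target, with the -i flag carried over verbatim from the trace and the file name invented for the example:

    rpc.py -s /var/tmp/spdk.sock save_config > /tmp/alias_cfg.json
    rpc.py -s /var/tmp/spdk.sock load_config -i < /tmp/alias_cfg.json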
00:07:55.066 [2024-06-11 06:00:25.659062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103864 ] 00:07:55.325 [2024-06-11 06:00:25.861633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.582 [2024-06-11 06:00:26.125659] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:55.582 [2024-06-11 06:00:26.125989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.987 06:00:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:56.988 06:00:27 -- common/autotest_common.sh@852 -- # return 0 00:07:56.988 06:00:27 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:57.246 06:00:27 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 103864 00:07:57.246 06:00:27 -- common/autotest_common.sh@926 -- # '[' -z 103864 ']' 00:07:57.246 06:00:27 -- common/autotest_common.sh@930 -- # kill -0 103864 00:07:57.246 06:00:27 -- common/autotest_common.sh@931 -- # uname 00:07:57.246 06:00:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:57.246 06:00:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 103864 00:07:57.246 06:00:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:57.246 killing process with pid 103864 00:07:57.246 06:00:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:57.246 06:00:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 103864' 00:07:57.246 06:00:27 -- common/autotest_common.sh@945 -- # kill 103864 00:07:57.246 06:00:27 -- common/autotest_common.sh@950 -- # wait 103864 00:08:00.529 00:08:00.529 real 0m5.026s 00:08:00.529 user 0m5.251s 00:08:00.529 sys 0m0.820s 00:08:00.529 06:00:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.529 06:00:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.529 ************************************ 00:08:00.530 END TEST alias_rpc 00:08:00.530 ************************************ 00:08:00.530 06:00:30 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:08:00.530 06:00:30 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:00.530 06:00:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:00.530 06:00:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:00.530 06:00:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.530 ************************************ 00:08:00.530 START TEST spdkcli_tcp 00:08:00.530 ************************************ 00:08:00.530 06:00:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:00.530 * Looking for test storage... 
00:08:00.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:00.530 06:00:30 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:00.530 06:00:30 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:00.530 06:00:30 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:00.530 06:00:30 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:00.530 06:00:30 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:00.530 06:00:30 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:00.530 06:00:30 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:00.530 06:00:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:00.530 06:00:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.530 06:00:30 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=103980 00:08:00.530 06:00:30 -- spdkcli/tcp.sh@27 -- # waitforlisten 103980 00:08:00.530 06:00:30 -- common/autotest_common.sh@819 -- # '[' -z 103980 ']' 00:08:00.530 06:00:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.530 06:00:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:00.530 06:00:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.530 06:00:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:00.530 06:00:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.530 06:00:30 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:00.530 [2024-06-11 06:00:30.743894] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
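spdkcli_tcp's twist, traced below, is that it drives the very same JSON-RPC surface over TCP instead of the UNIX socket: a socat process listens on 127.0.0.1:9998 and forwards to /var/tmp/spdk.sock, and rpc.py is pointed at the TCP side with retry and timeout flags. A minimal sketch of that bridge, lifted directly from the commands that follow in the trace:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # -r: connection retries, -t: per-call timeout, -s/-p: TCP address and port
    rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"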
00:08:00.530 [2024-06-11 06:00:30.745032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103980 ] 00:08:00.530 [2024-06-11 06:00:30.943227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:00.787 [2024-06-11 06:00:31.258493] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:00.787 [2024-06-11 06:00:31.259231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.787 [2024-06-11 06:00:31.259252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.186 06:00:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:02.186 06:00:32 -- common/autotest_common.sh@852 -- # return 0 00:08:02.186 06:00:32 -- spdkcli/tcp.sh@31 -- # socat_pid=104007 00:08:02.186 06:00:32 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:02.186 06:00:32 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:02.186 [ 00:08:02.186 "spdk_get_version", 00:08:02.186 "rpc_get_methods", 00:08:02.186 "trace_get_info", 00:08:02.186 "trace_get_tpoint_group_mask", 00:08:02.186 "trace_disable_tpoint_group", 00:08:02.186 "trace_enable_tpoint_group", 00:08:02.186 "trace_clear_tpoint_mask", 00:08:02.186 "trace_set_tpoint_mask", 00:08:02.186 "framework_get_pci_devices", 00:08:02.186 "framework_get_config", 00:08:02.186 "framework_get_subsystems", 00:08:02.186 "iobuf_get_stats", 00:08:02.186 "iobuf_set_options", 00:08:02.186 "sock_set_default_impl", 00:08:02.186 "sock_impl_set_options", 00:08:02.186 "sock_impl_get_options", 00:08:02.186 "vmd_rescan", 00:08:02.186 "vmd_remove_device", 00:08:02.186 "vmd_enable", 00:08:02.186 "accel_get_stats", 00:08:02.186 "accel_set_options", 00:08:02.186 "accel_set_driver", 00:08:02.186 "accel_crypto_key_destroy", 00:08:02.186 "accel_crypto_keys_get", 00:08:02.187 "accel_crypto_key_create", 00:08:02.187 "accel_assign_opc", 00:08:02.187 "accel_get_module_info", 00:08:02.187 "accel_get_opc_assignments", 00:08:02.187 "notify_get_notifications", 00:08:02.187 "notify_get_types", 00:08:02.187 "bdev_get_histogram", 00:08:02.187 "bdev_enable_histogram", 00:08:02.187 "bdev_set_qos_limit", 00:08:02.187 "bdev_set_qd_sampling_period", 00:08:02.187 "bdev_get_bdevs", 00:08:02.187 "bdev_reset_iostat", 00:08:02.187 "bdev_get_iostat", 00:08:02.187 "bdev_examine", 00:08:02.187 "bdev_wait_for_examine", 00:08:02.187 "bdev_set_options", 00:08:02.187 "scsi_get_devices", 00:08:02.187 "thread_set_cpumask", 00:08:02.187 "framework_get_scheduler", 00:08:02.187 "framework_set_scheduler", 00:08:02.187 "framework_get_reactors", 00:08:02.187 "thread_get_io_channels", 00:08:02.187 "thread_get_pollers", 00:08:02.187 "thread_get_stats", 00:08:02.187 "framework_monitor_context_switch", 00:08:02.187 "spdk_kill_instance", 00:08:02.187 "log_enable_timestamps", 00:08:02.187 "log_get_flags", 00:08:02.187 "log_clear_flag", 00:08:02.187 "log_set_flag", 00:08:02.187 "log_get_level", 00:08:02.187 "log_set_level", 00:08:02.187 "log_get_print_level", 00:08:02.187 "log_set_print_level", 00:08:02.187 "framework_enable_cpumask_locks", 00:08:02.187 "framework_disable_cpumask_locks", 00:08:02.187 "framework_wait_init", 00:08:02.187 "framework_start_init", 00:08:02.187 "virtio_blk_create_transport", 00:08:02.187 "virtio_blk_get_transports", 
00:08:02.187 "vhost_controller_set_coalescing", 00:08:02.187 "vhost_get_controllers", 00:08:02.187 "vhost_delete_controller", 00:08:02.187 "vhost_create_blk_controller", 00:08:02.187 "vhost_scsi_controller_remove_target", 00:08:02.187 "vhost_scsi_controller_add_target", 00:08:02.187 "vhost_start_scsi_controller", 00:08:02.187 "vhost_create_scsi_controller", 00:08:02.187 "nbd_get_disks", 00:08:02.187 "nbd_stop_disk", 00:08:02.187 "nbd_start_disk", 00:08:02.187 "env_dpdk_get_mem_stats", 00:08:02.187 "nvmf_subsystem_get_listeners", 00:08:02.187 "nvmf_subsystem_get_qpairs", 00:08:02.187 "nvmf_subsystem_get_controllers", 00:08:02.187 "nvmf_get_stats", 00:08:02.187 "nvmf_get_transports", 00:08:02.187 "nvmf_create_transport", 00:08:02.187 "nvmf_get_targets", 00:08:02.187 "nvmf_delete_target", 00:08:02.187 "nvmf_create_target", 00:08:02.187 "nvmf_subsystem_allow_any_host", 00:08:02.187 "nvmf_subsystem_remove_host", 00:08:02.187 "nvmf_subsystem_add_host", 00:08:02.187 "nvmf_subsystem_remove_ns", 00:08:02.187 "nvmf_subsystem_add_ns", 00:08:02.187 "nvmf_subsystem_listener_set_ana_state", 00:08:02.187 "nvmf_discovery_get_referrals", 00:08:02.187 "nvmf_discovery_remove_referral", 00:08:02.187 "nvmf_discovery_add_referral", 00:08:02.187 "nvmf_subsystem_remove_listener", 00:08:02.187 "nvmf_subsystem_add_listener", 00:08:02.187 "nvmf_delete_subsystem", 00:08:02.187 "nvmf_create_subsystem", 00:08:02.187 "nvmf_get_subsystems", 00:08:02.187 "nvmf_set_crdt", 00:08:02.187 "nvmf_set_config", 00:08:02.187 "nvmf_set_max_subsystems", 00:08:02.187 "iscsi_set_options", 00:08:02.187 "iscsi_get_auth_groups", 00:08:02.187 "iscsi_auth_group_remove_secret", 00:08:02.187 "iscsi_auth_group_add_secret", 00:08:02.187 "iscsi_delete_auth_group", 00:08:02.187 "iscsi_create_auth_group", 00:08:02.187 "iscsi_set_discovery_auth", 00:08:02.187 "iscsi_get_options", 00:08:02.187 "iscsi_target_node_request_logout", 00:08:02.187 "iscsi_target_node_set_redirect", 00:08:02.187 "iscsi_target_node_set_auth", 00:08:02.187 "iscsi_target_node_add_lun", 00:08:02.187 "iscsi_get_connections", 00:08:02.187 "iscsi_portal_group_set_auth", 00:08:02.187 "iscsi_start_portal_group", 00:08:02.187 "iscsi_delete_portal_group", 00:08:02.187 "iscsi_create_portal_group", 00:08:02.187 "iscsi_get_portal_groups", 00:08:02.187 "iscsi_delete_target_node", 00:08:02.187 "iscsi_target_node_remove_pg_ig_maps", 00:08:02.187 "iscsi_target_node_add_pg_ig_maps", 00:08:02.187 "iscsi_create_target_node", 00:08:02.187 "iscsi_get_target_nodes", 00:08:02.187 "iscsi_delete_initiator_group", 00:08:02.187 "iscsi_initiator_group_remove_initiators", 00:08:02.187 "iscsi_initiator_group_add_initiators", 00:08:02.187 "iscsi_create_initiator_group", 00:08:02.187 "iscsi_get_initiator_groups", 00:08:02.187 "iaa_scan_accel_module", 00:08:02.187 "dsa_scan_accel_module", 00:08:02.187 "ioat_scan_accel_module", 00:08:02.187 "accel_error_inject_error", 00:08:02.187 "bdev_iscsi_delete", 00:08:02.187 "bdev_iscsi_create", 00:08:02.187 "bdev_iscsi_set_options", 00:08:02.187 "bdev_virtio_attach_controller", 00:08:02.187 "bdev_virtio_scsi_get_devices", 00:08:02.187 "bdev_virtio_detach_controller", 00:08:02.187 "bdev_virtio_blk_set_hotplug", 00:08:02.187 "bdev_ftl_set_property", 00:08:02.187 "bdev_ftl_get_properties", 00:08:02.187 "bdev_ftl_get_stats", 00:08:02.187 "bdev_ftl_unmap", 00:08:02.187 "bdev_ftl_unload", 00:08:02.187 "bdev_ftl_delete", 00:08:02.187 "bdev_ftl_load", 00:08:02.187 "bdev_ftl_create", 00:08:02.187 "bdev_aio_delete", 00:08:02.187 "bdev_aio_rescan", 00:08:02.187 "bdev_aio_create", 
00:08:02.187 "blobfs_create", 00:08:02.187 "blobfs_detect", 00:08:02.187 "blobfs_set_cache_size", 00:08:02.187 "bdev_zone_block_delete", 00:08:02.187 "bdev_zone_block_create", 00:08:02.187 "bdev_delay_delete", 00:08:02.187 "bdev_delay_create", 00:08:02.187 "bdev_delay_update_latency", 00:08:02.187 "bdev_split_delete", 00:08:02.187 "bdev_split_create", 00:08:02.187 "bdev_error_inject_error", 00:08:02.187 "bdev_error_delete", 00:08:02.187 "bdev_error_create", 00:08:02.187 "bdev_raid_set_options", 00:08:02.187 "bdev_raid_remove_base_bdev", 00:08:02.187 "bdev_raid_add_base_bdev", 00:08:02.187 "bdev_raid_delete", 00:08:02.187 "bdev_raid_create", 00:08:02.187 "bdev_raid_get_bdevs", 00:08:02.187 "bdev_lvol_grow_lvstore", 00:08:02.187 "bdev_lvol_get_lvols", 00:08:02.187 "bdev_lvol_get_lvstores", 00:08:02.187 "bdev_lvol_delete", 00:08:02.187 "bdev_lvol_set_read_only", 00:08:02.187 "bdev_lvol_resize", 00:08:02.187 "bdev_lvol_decouple_parent", 00:08:02.187 "bdev_lvol_inflate", 00:08:02.187 "bdev_lvol_rename", 00:08:02.187 "bdev_lvol_clone_bdev", 00:08:02.187 "bdev_lvol_clone", 00:08:02.187 "bdev_lvol_snapshot", 00:08:02.187 "bdev_lvol_create", 00:08:02.187 "bdev_lvol_delete_lvstore", 00:08:02.187 "bdev_lvol_rename_lvstore", 00:08:02.187 "bdev_lvol_create_lvstore", 00:08:02.187 "bdev_passthru_delete", 00:08:02.187 "bdev_passthru_create", 00:08:02.187 "bdev_nvme_cuse_unregister", 00:08:02.187 "bdev_nvme_cuse_register", 00:08:02.187 "bdev_opal_new_user", 00:08:02.187 "bdev_opal_set_lock_state", 00:08:02.187 "bdev_opal_delete", 00:08:02.187 "bdev_opal_get_info", 00:08:02.187 "bdev_opal_create", 00:08:02.187 "bdev_nvme_opal_revert", 00:08:02.187 "bdev_nvme_opal_init", 00:08:02.187 "bdev_nvme_send_cmd", 00:08:02.187 "bdev_nvme_get_path_iostat", 00:08:02.187 "bdev_nvme_get_mdns_discovery_info", 00:08:02.187 "bdev_nvme_stop_mdns_discovery", 00:08:02.187 "bdev_nvme_start_mdns_discovery", 00:08:02.187 "bdev_nvme_set_multipath_policy", 00:08:02.187 "bdev_nvme_set_preferred_path", 00:08:02.187 "bdev_nvme_get_io_paths", 00:08:02.187 "bdev_nvme_remove_error_injection", 00:08:02.187 "bdev_nvme_add_error_injection", 00:08:02.187 "bdev_nvme_get_discovery_info", 00:08:02.187 "bdev_nvme_stop_discovery", 00:08:02.187 "bdev_nvme_start_discovery", 00:08:02.187 "bdev_nvme_get_controller_health_info", 00:08:02.187 "bdev_nvme_disable_controller", 00:08:02.187 "bdev_nvme_enable_controller", 00:08:02.187 "bdev_nvme_reset_controller", 00:08:02.187 "bdev_nvme_get_transport_statistics", 00:08:02.187 "bdev_nvme_apply_firmware", 00:08:02.187 "bdev_nvme_detach_controller", 00:08:02.187 "bdev_nvme_get_controllers", 00:08:02.187 "bdev_nvme_attach_controller", 00:08:02.187 "bdev_nvme_set_hotplug", 00:08:02.187 "bdev_nvme_set_options", 00:08:02.187 "bdev_null_resize", 00:08:02.187 "bdev_null_delete", 00:08:02.187 "bdev_null_create", 00:08:02.187 "bdev_malloc_delete", 00:08:02.187 "bdev_malloc_create" 00:08:02.187 ] 00:08:02.187 06:00:32 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:02.187 06:00:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:02.187 06:00:32 -- common/autotest_common.sh@10 -- # set +x 00:08:02.187 06:00:32 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:02.187 06:00:32 -- spdkcli/tcp.sh@38 -- # killprocess 103980 00:08:02.187 06:00:32 -- common/autotest_common.sh@926 -- # '[' -z 103980 ']' 00:08:02.187 06:00:32 -- common/autotest_common.sh@930 -- # kill -0 103980 00:08:02.187 06:00:32 -- common/autotest_common.sh@931 -- # uname 00:08:02.187 06:00:32 -- common/autotest_common.sh@931 
-- # '[' Linux = Linux ']' 00:08:02.187 06:00:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 103980 00:08:02.187 06:00:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:02.187 06:00:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:02.187 06:00:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 103980' 00:08:02.187 killing process with pid 103980 00:08:02.187 06:00:32 -- common/autotest_common.sh@945 -- # kill 103980 00:08:02.187 06:00:32 -- common/autotest_common.sh@950 -- # wait 103980 00:08:05.469 00:08:05.470 real 0m4.886s 00:08:05.470 user 0m8.712s 00:08:05.470 sys 0m0.793s 00:08:05.470 06:00:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.470 06:00:35 -- common/autotest_common.sh@10 -- # set +x 00:08:05.470 ************************************ 00:08:05.470 END TEST spdkcli_tcp 00:08:05.470 ************************************ 00:08:05.470 06:00:35 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:05.470 06:00:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:05.470 06:00:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:05.470 06:00:35 -- common/autotest_common.sh@10 -- # set +x 00:08:05.470 ************************************ 00:08:05.470 START TEST dpdk_mem_utility 00:08:05.470 ************************************ 00:08:05.470 06:00:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:05.470 * Looking for test storage... 00:08:05.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:05.470 06:00:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:05.470 06:00:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=104112 00:08:05.470 06:00:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 104112 00:08:05.470 06:00:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:05.470 06:00:35 -- common/autotest_common.sh@819 -- # '[' -z 104112 ']' 00:08:05.470 06:00:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.470 06:00:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:05.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.470 06:00:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.470 06:00:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:05.470 06:00:35 -- common/autotest_common.sh@10 -- # set +x 00:08:05.470 [2024-06-11 06:00:35.686610] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
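The long quoted array earlier in this section is the spdkcli/tcp test enumerating the target's RPC methods; it appears to be the output of the standard rpc_get_methods RPC. A minimal sketch of reproducing that query by hand against a running spdk_tgt, assuming the default UNIX socket shown in this log:

    # list every RPC method the running target exposes
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods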
00:08:05.470 [2024-06-11 06:00:35.686844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104112 ] 00:08:05.470 [2024-06-11 06:00:35.869821] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.728 [2024-06-11 06:00:36.118253] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:05.728 [2024-06-11 06:00:36.118505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.718 06:00:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:06.718 06:00:37 -- common/autotest_common.sh@852 -- # return 0 00:08:06.718 06:00:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:06.718 06:00:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:06.718 06:00:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:06.718 06:00:37 -- common/autotest_common.sh@10 -- # set +x 00:08:06.718 { 00:08:06.718 "filename": "/tmp/spdk_mem_dump.txt" 00:08:06.718 } 00:08:06.718 06:00:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:06.718 06:00:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:06.977 DPDK memory size 820.000000 MiB in 1 heap(s) 00:08:06.977 1 heaps totaling size 820.000000 MiB 00:08:06.977 size: 820.000000 MiB heap id: 0 00:08:06.977 end heaps---------- 00:08:06.977 8 mempools totaling size 598.116089 MiB 00:08:06.977 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:06.977 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:06.977 size: 84.521057 MiB name: bdev_io_104112 00:08:06.977 size: 51.011292 MiB name: evtpool_104112 00:08:06.977 size: 50.003479 MiB name: msgpool_104112 00:08:06.977 size: 21.763794 MiB name: PDU_Pool 00:08:06.977 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:06.977 size: 0.026123 MiB name: Session_Pool 00:08:06.977 end mempools------- 00:08:06.977 6 memzones totaling size 4.142822 MiB 00:08:06.977 size: 1.000366 MiB name: RG_ring_0_104112 00:08:06.977 size: 1.000366 MiB name: RG_ring_1_104112 00:08:06.977 size: 1.000366 MiB name: RG_ring_4_104112 00:08:06.977 size: 1.000366 MiB name: RG_ring_5_104112 00:08:06.977 size: 0.125366 MiB name: RG_ring_2_104112 00:08:06.977 size: 0.015991 MiB name: RG_ring_3_104112 00:08:06.977 end memzones------- 00:08:06.977 06:00:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:06.977 heap id: 0 total size: 820.000000 MiB number of busy elements: 222 number of free elements: 18 00:08:06.977 list of free elements. 
size: 18.470703 MiB 00:08:06.977 element at address: 0x200000400000 with size: 1.999451 MiB 00:08:06.977 element at address: 0x200000800000 with size: 1.996887 MiB 00:08:06.977 element at address: 0x200007000000 with size: 1.995972 MiB 00:08:06.977 element at address: 0x20000b200000 with size: 1.995972 MiB 00:08:06.977 element at address: 0x200019100040 with size: 0.999939 MiB 00:08:06.977 element at address: 0x200019500040 with size: 0.999939 MiB 00:08:06.977 element at address: 0x200019600000 with size: 0.999329 MiB 00:08:06.977 element at address: 0x200003e00000 with size: 0.996094 MiB 00:08:06.977 element at address: 0x200032200000 with size: 0.994324 MiB 00:08:06.977 element at address: 0x200018e00000 with size: 0.959656 MiB 00:08:06.977 element at address: 0x200019900040 with size: 0.937256 MiB 00:08:06.977 element at address: 0x200000200000 with size: 0.835083 MiB 00:08:06.977 element at address: 0x20001b000000 with size: 0.561951 MiB 00:08:06.977 element at address: 0x200019200000 with size: 0.489197 MiB 00:08:06.977 element at address: 0x200019a00000 with size: 0.485413 MiB 00:08:06.977 element at address: 0x200013800000 with size: 0.468140 MiB 00:08:06.977 element at address: 0x200028400000 with size: 0.399963 MiB 00:08:06.977 element at address: 0x200003a00000 with size: 0.356140 MiB 00:08:06.977 list of standard malloc elements. size: 199.264893 MiB 00:08:06.977 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:08:06.977 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:08:06.977 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:08:06.977 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:08:06.977 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:08:06.978 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:08:06.978 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:08:06.978 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:08:06.978 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:08:06.978 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:08:06.978 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:08:06.978 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000002d6a80 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000002d7000 with size: 0.000244 MiB 
00:08:06.978 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:08:06.978 element at address: 0x200003aff980 with size: 0.000244 MiB 00:08:06.978 element at address: 0x200003affa80 with size: 0.000244 MiB 00:08:06.978 element at address: 0x200003eff000 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20000b1ff180 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:08:06.978 element at address: 0x200013877d80 with size: 0.000244 MiB 00:08:06.978 element at address: 0x200013877e80 with size: 0.000244 MiB 00:08:06.978 element at address: 0x200013877f80 with size: 0.000244 MiB 00:08:06.978 element at address: 0x200013878080 with size: 0.000244 MiB 00:08:06.978 element at address: 0x200013878180 with size: 0.000244 MiB 00:08:06.978 element at address: 0x200013878280 with size: 0.000244 MiB 00:08:06.978 element at address: 0x200013878380 with size: 0.000244 MiB 00:08:06.978 element at address: 0x200013878480 with size: 0.000244 MiB 00:08:06.978 element at 
address: 0x200013878580 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:08:06.978 element at address: 0x200019abc680 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b08fdc0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b08fec0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b08ffc0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0900c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0901c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0902c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0922c0 
with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:08:06.978 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20001b0953c0 with size: 0.000244 MiB 
00:08:06.979 element at address: 0x200028466640 with size: 0.000244 MiB 00:08:06.979 element at address: 0x200028466740 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846d400 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846d680 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846d780 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846d880 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846d980 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846da80 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846db80 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846de80 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846df80 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846e080 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846e180 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846e280 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846e380 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846e480 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846e580 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846e680 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846e780 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846e880 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846e980 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846f080 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846f180 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846f280 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846f380 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846f480 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846f580 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846f680 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846f780 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846f880 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846f980 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:08:06.979 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:08:06.979 list of memzone associated elements. 
size: 602.264404 MiB 00:08:06.979 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:08:06.979 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:06.979 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:08:06.979 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:06.979 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:08:06.979 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_104112_0 00:08:06.979 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:08:06.979 associated memzone info: size: 48.002930 MiB name: MP_evtpool_104112_0 00:08:06.979 element at address: 0x200003fff340 with size: 48.003113 MiB 00:08:06.979 associated memzone info: size: 48.002930 MiB name: MP_msgpool_104112_0 00:08:06.979 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:08:06.979 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:06.979 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:08:06.979 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:06.979 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:08:06.979 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_104112 00:08:06.979 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:08:06.979 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_104112 00:08:06.979 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:08:06.979 associated memzone info: size: 1.007996 MiB name: MP_evtpool_104112 00:08:06.979 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:08:06.979 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:06.979 element at address: 0x200019abc780 with size: 1.008179 MiB 00:08:06.979 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:06.979 element at address: 0x200018efde00 with size: 1.008179 MiB 00:08:06.979 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:06.979 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:08:06.979 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:06.979 element at address: 0x200003eff100 with size: 1.000549 MiB 00:08:06.979 associated memzone info: size: 1.000366 MiB name: RG_ring_0_104112 00:08:06.979 element at address: 0x200003affb80 with size: 1.000549 MiB 00:08:06.979 associated memzone info: size: 1.000366 MiB name: RG_ring_1_104112 00:08:06.979 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:08:06.979 associated memzone info: size: 1.000366 MiB name: RG_ring_4_104112 00:08:06.979 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:08:06.979 associated memzone info: size: 1.000366 MiB name: RG_ring_5_104112 00:08:06.979 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:08:06.979 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_104112 00:08:06.979 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:08:06.979 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:06.979 element at address: 0x200013878680 with size: 0.500549 MiB 00:08:06.979 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:06.979 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:08:06.979 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:06.979 element at address: 0x200003adf740 with size: 0.125549 MiB 00:08:06.979 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_104112 00:08:06.979 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:08:06.979 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:06.979 element at address: 0x200028466840 with size: 0.023804 MiB 00:08:06.979 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:06.979 element at address: 0x200003adb500 with size: 0.016174 MiB 00:08:06.979 associated memzone info: size: 0.015991 MiB name: RG_ring_3_104112 00:08:06.979 element at address: 0x20002846c9c0 with size: 0.002502 MiB 00:08:06.979 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:06.979 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:08:06.979 associated memzone info: size: 0.000183 MiB name: MP_msgpool_104112 00:08:06.979 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:08:06.979 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_104112 00:08:06.979 element at address: 0x20002846d500 with size: 0.000366 MiB 00:08:06.979 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:06.979 06:00:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:06.979 06:00:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 104112 00:08:06.979 06:00:37 -- common/autotest_common.sh@926 -- # '[' -z 104112 ']' 00:08:06.979 06:00:37 -- common/autotest_common.sh@930 -- # kill -0 104112 00:08:06.979 06:00:37 -- common/autotest_common.sh@931 -- # uname 00:08:06.980 06:00:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:06.980 06:00:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 104112 00:08:06.980 06:00:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:06.980 06:00:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:06.980 06:00:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 104112' 00:08:06.980 killing process with pid 104112 00:08:06.980 06:00:37 -- common/autotest_common.sh@945 -- # kill 104112 00:08:06.980 06:00:37 -- common/autotest_common.sh@950 -- # wait 104112 00:08:10.257 ************************************ 00:08:10.257 END TEST dpdk_mem_utility 00:08:10.257 ************************************ 00:08:10.257 00:08:10.257 real 0m4.766s 00:08:10.257 user 0m4.799s 00:08:10.257 sys 0m0.736s 00:08:10.257 06:00:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.257 06:00:40 -- common/autotest_common.sh@10 -- # set +x 00:08:10.257 06:00:40 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:10.257 06:00:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:10.257 06:00:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:10.257 06:00:40 -- common/autotest_common.sh@10 -- # set +x 00:08:10.257 ************************************ 00:08:10.257 START TEST event 00:08:10.257 ************************************ 00:08:10.257 06:00:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:10.257 * Looking for test storage... 
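Stepping back to the dpdk_mem_utility run that just ended: the heap, mempool and memzone tables above come from a two-step flow, sketched here against a running spdk_tgt (the dump path, /tmp/spdk_mem_dump.txt, is the default shown in the log):

    # 1) have the target write its DPDK memory statistics to the dump file
    ./scripts/rpc.py env_dpdk_get_mem_stats
    # 2) summarize the dump; the bare invocation prints the heap/mempool/memzone
    #    totals, and -m 0 appears to expand the per-element view of heap 0
    ./scripts/dpdk_mem_info.py
    ./scripts/dpdk_mem_info.py -m 0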
00:08:10.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:10.257 06:00:40 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:10.257 06:00:40 -- bdev/nbd_common.sh@6 -- # set -e 00:08:10.257 06:00:40 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:10.257 06:00:40 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:10.257 06:00:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:10.257 06:00:40 -- common/autotest_common.sh@10 -- # set +x 00:08:10.257 ************************************ 00:08:10.257 START TEST event_perf 00:08:10.257 ************************************ 00:08:10.257 06:00:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:10.257 Running I/O for 1 seconds...[2024-06-11 06:00:40.491477] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:10.257 [2024-06-11 06:00:40.491867] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104233 ] 00:08:10.257 [2024-06-11 06:00:40.717906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:10.514 [2024-06-11 06:00:41.075681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.514 [2024-06-11 06:00:41.075761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.514 [2024-06-11 06:00:41.075831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.514 [2024-06-11 06:00:41.075833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.900 Running I/O for 1 seconds... 00:08:11.900 lcore 0: 184932 00:08:11.900 lcore 1: 184931 00:08:11.900 lcore 2: 184930 00:08:11.900 lcore 3: 184931 00:08:11.900 done. 00:08:12.160 00:08:12.160 real 0m2.117s 00:08:12.160 user 0m4.840s 00:08:12.160 sys 0m0.180s 00:08:12.160 06:00:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.160 06:00:42 -- common/autotest_common.sh@10 -- # set +x 00:08:12.160 ************************************ 00:08:12.160 END TEST event_perf 00:08:12.160 ************************************ 00:08:12.160 06:00:42 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:12.160 06:00:42 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:12.160 06:00:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:12.160 06:00:42 -- common/autotest_common.sh@10 -- # set +x 00:08:12.160 ************************************ 00:08:12.160 START TEST event_reactor 00:08:12.160 ************************************ 00:08:12.160 06:00:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:12.160 [2024-06-11 06:00:42.667901] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
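On the event_perf run above: the -m 0xF core mask is what produced the four per-lcore counters, since 0xF selects lcores 0 through 3, and -t 1 bounds the measurement to one second. A re-run under the same assumptions, from the repo root:

    # 0xF == 0b1111 -> lcores 0,1,2,3; one-second measurement window
    ./test/event/event_perf/event_perf -m 0xF -t 1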
00:08:12.160 [2024-06-11 06:00:42.668709] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104292 ] 00:08:12.419 [2024-06-11 06:00:42.846787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.677 [2024-06-11 06:00:43.116762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.049 test_start 00:08:14.049 oneshot 00:08:14.049 tick 100 00:08:14.049 tick 100 00:08:14.049 tick 250 00:08:14.049 tick 100 00:08:14.049 tick 100 00:08:14.049 tick 100 00:08:14.049 tick 250 00:08:14.049 tick 500 00:08:14.049 tick 100 00:08:14.049 tick 100 00:08:14.049 tick 250 00:08:14.049 tick 100 00:08:14.049 tick 100 00:08:14.049 test_end 00:08:14.049 00:08:14.049 real 0m2.047s 00:08:14.049 user 0m1.777s 00:08:14.049 sys 0m0.169s 00:08:14.049 06:00:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.049 06:00:44 -- common/autotest_common.sh@10 -- # set +x 00:08:14.049 ************************************ 00:08:14.049 END TEST event_reactor 00:08:14.049 ************************************ 00:08:14.305 06:00:44 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:14.305 06:00:44 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:14.305 06:00:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:14.305 06:00:44 -- common/autotest_common.sh@10 -- # set +x 00:08:14.305 ************************************ 00:08:14.305 START TEST event_reactor_perf 00:08:14.305 ************************************ 00:08:14.305 06:00:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:14.305 [2024-06-11 06:00:44.752922] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
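For the event_reactor output above, the oneshot/tick trace appears to record the test's scheduled events firing, with the printed values (100, 250, 500) matching the periods it registers; that reading is inferred from the output, not from documentation. The run itself is a single-core, one-second invocation:

    # single reactor, one second; prints test_start, the oneshot/tick trace, test_end
    ./test/event/reactor/reactor -t 1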
00:08:14.305 [2024-06-11 06:00:44.753124] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104336 ] 00:08:14.305 [2024-06-11 06:00:44.922959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.868 [2024-06-11 06:00:45.234665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.245 test_start 00:08:16.245 test_end 00:08:16.245 Performance: 329457 events per second 00:08:16.245 00:08:16.245 real 0m2.091s 00:08:16.245 user 0m1.863s 00:08:16.245 sys 0m0.128s 00:08:16.245 06:00:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.245 06:00:46 -- common/autotest_common.sh@10 -- # set +x 00:08:16.245 ************************************ 00:08:16.245 END TEST event_reactor_perf 00:08:16.245 ************************************ 00:08:16.245 06:00:46 -- event/event.sh@49 -- # uname -s 00:08:16.245 06:00:46 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:16.245 06:00:46 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:16.245 06:00:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:16.245 06:00:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:16.245 06:00:46 -- common/autotest_common.sh@10 -- # set +x 00:08:16.245 ************************************ 00:08:16.245 START TEST event_scheduler 00:08:16.245 ************************************ 00:08:16.245 06:00:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:16.555 * Looking for test storage... 00:08:16.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:16.555 06:00:46 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:16.555 06:00:46 -- scheduler/scheduler.sh@35 -- # scheduler_pid=104420 00:08:16.555 06:00:46 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:16.555 06:00:46 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:16.555 06:00:46 -- scheduler/scheduler.sh@37 -- # waitforlisten 104420 00:08:16.555 06:00:46 -- common/autotest_common.sh@819 -- # '[' -z 104420 ']' 00:08:16.555 06:00:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.555 06:00:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:16.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.555 06:00:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.555 06:00:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:16.555 06:00:46 -- common/autotest_common.sh@10 -- # set +x 00:08:16.555 [2024-06-11 06:00:47.005586] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
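The scheduler app is launched above with -m 0xF -p 0x2 --wait-for-rpc -f. Judging from the EAL parameters that follow (--main-lcore=2) and the reactor_2 process name observed at teardown, -p 0x2 selects core 2 as the main lcore, while --wait-for-rpc holds framework initialization until an RPC releases it. In sketch form:

    # 4-core mask, main lcore 2, initialization gated on RPC
    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f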
00:08:16.555 [2024-06-11 06:00:47.006626] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104420 ] 00:08:16.813 [2024-06-11 06:00:47.223123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:17.071 [2024-06-11 06:00:47.508916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.071 [2024-06-11 06:00:47.509088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.071 [2024-06-11 06:00:47.509019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.071 [2024-06-11 06:00:47.509089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.331 06:00:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:17.331 06:00:47 -- common/autotest_common.sh@852 -- # return 0 00:08:17.331 06:00:47 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:17.331 06:00:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.331 06:00:47 -- common/autotest_common.sh@10 -- # set +x 00:08:17.331 POWER: Env isn't set yet! 00:08:17.331 POWER: Attempting to initialise ACPI cpufreq power management... 00:08:17.331 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:17.331 POWER: Cannot set governor of lcore 0 to userspace 00:08:17.331 POWER: Attempting to initialise PSTAT power management... 00:08:17.331 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:17.331 POWER: Cannot set governor of lcore 0 to performance 00:08:17.331 POWER: Attempting to initialise AMD PSTATE power management... 00:08:17.331 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:17.331 POWER: Cannot set governor of lcore 0 to userspace 00:08:17.331 POWER: Attempting to initialise CPPC power management... 00:08:17.331 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:17.331 POWER: Cannot set governor of lcore 0 to userspace 00:08:17.331 POWER: Attempting to initialise VM power management... 00:08:17.331 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:17.331 POWER: Unable to set Power Management Environment for lcore 0 00:08:17.331 [2024-06-11 06:00:47.968246] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:08:17.331 [2024-06-11 06:00:47.968325] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:08:17.331 [2024-06-11 06:00:47.968349] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:08:17.331 06:00:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.331 06:00:47 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:17.331 06:00:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.331 06:00:47 -- common/autotest_common.sh@10 -- # set +x 00:08:17.896 [2024-06-11 06:00:48.420642] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
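Because the app is paused in --wait-for-rpc mode, the test selects the scheduler before releasing initialization; the POWER errors above merely record that no cpufreq governor was reachable in the VM, after which the dynamic scheduler continues without the dpdk governor. The RPC sequence, reduced to a hand-driven sketch:

    # both calls must land while the app is still waiting for RPC
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init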
00:08:17.896 06:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.896 06:00:48 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:17.896 06:00:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:17.896 06:00:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:17.896 06:00:48 -- common/autotest_common.sh@10 -- # set +x 00:08:17.896 ************************************ 00:08:17.896 START TEST scheduler_create_thread 00:08:17.896 ************************************ 00:08:17.896 06:00:48 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:08:17.896 06:00:48 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:17.896 06:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.896 06:00:48 -- common/autotest_common.sh@10 -- # set +x 00:08:17.896 2 00:08:17.896 06:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.896 06:00:48 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:17.896 06:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.896 06:00:48 -- common/autotest_common.sh@10 -- # set +x 00:08:17.896 3 00:08:17.896 06:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.896 06:00:48 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:17.896 06:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.896 06:00:48 -- common/autotest_common.sh@10 -- # set +x 00:08:17.896 4 00:08:17.896 06:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.896 06:00:48 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:17.896 06:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.896 06:00:48 -- common/autotest_common.sh@10 -- # set +x 00:08:17.896 5 00:08:17.896 06:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.896 06:00:48 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:17.896 06:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.896 06:00:48 -- common/autotest_common.sh@10 -- # set +x 00:08:17.896 6 00:08:17.896 06:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.896 06:00:48 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:17.896 06:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.896 06:00:48 -- common/autotest_common.sh@10 -- # set +x 00:08:17.896 7 00:08:17.896 06:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.896 06:00:48 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:17.896 06:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.896 06:00:48 -- common/autotest_common.sh@10 -- # set +x 00:08:17.896 8 00:08:17.896 06:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.896 06:00:48 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:17.896 06:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.896 06:00:48 -- common/autotest_common.sh@10 -- # set +x 00:08:17.896 9 00:08:17.896 
06:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.896 06:00:48 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:17.896 06:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.896 06:00:48 -- common/autotest_common.sh@10 -- # set +x 00:08:17.896 10 00:08:17.896 06:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.896 06:00:48 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:17.896 06:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.896 06:00:48 -- common/autotest_common.sh@10 -- # set +x 00:08:17.896 06:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:17.896 06:00:48 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:17.896 06:00:48 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:17.896 06:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:17.896 06:00:48 -- common/autotest_common.sh@10 -- # set +x 00:08:18.154 06:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:18.154 06:00:48 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:18.154 06:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:18.154 06:00:48 -- common/autotest_common.sh@10 -- # set +x 00:08:19.527 06:00:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:19.527 06:00:50 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:19.527 06:00:50 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:19.527 06:00:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:19.527 06:00:50 -- common/autotest_common.sh@10 -- # set +x 00:08:20.459 06:00:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:20.459 00:08:20.459 real 0m2.636s 00:08:20.459 user 0m0.017s 00:08:20.459 sys 0m0.010s 00:08:20.459 06:00:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.459 ************************************ 00:08:20.459 06:00:51 -- common/autotest_common.sh@10 -- # set +x 00:08:20.459 END TEST scheduler_create_thread 00:08:20.459 ************************************ 00:08:20.718 06:00:51 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:20.718 06:00:51 -- scheduler/scheduler.sh@46 -- # killprocess 104420 00:08:20.718 06:00:51 -- common/autotest_common.sh@926 -- # '[' -z 104420 ']' 00:08:20.718 06:00:51 -- common/autotest_common.sh@930 -- # kill -0 104420 00:08:20.718 06:00:51 -- common/autotest_common.sh@931 -- # uname 00:08:20.718 06:00:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:20.718 06:00:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 104420 00:08:20.718 06:00:51 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:08:20.718 06:00:51 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:08:20.718 killing process with pid 104420 00:08:20.718 06:00:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 104420' 00:08:20.718 06:00:51 -- common/autotest_common.sh@945 -- # kill 104420 00:08:20.718 06:00:51 -- common/autotest_common.sh@950 -- # wait 104420 00:08:20.975 [2024-06-11 06:00:51.454502] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
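The scheduler_create_thread subtest above drives everything through an rpc.py plugin bundled with the test. One representative call with the flags seen in the log (-n thread name, -m cpumask, -a active percentage), assuming PYTHONPATH already points at the directory containing scheduler_plugin:

    # create a thread pinned to core 0 that reports 100% activity
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create \
        -n active_pinned -m 0x1 -a 100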
00:08:22.872 00:08:22.872 real 0m6.269s 00:08:22.872 user 0m12.774s 00:08:22.872 sys 0m0.546s 00:08:22.872 06:00:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.872 06:00:53 -- common/autotest_common.sh@10 -- # set +x 00:08:22.872 ************************************ 00:08:22.872 END TEST event_scheduler 00:08:22.872 ************************************ 00:08:22.872 06:00:53 -- event/event.sh@51 -- # modprobe -n nbd 00:08:22.872 06:00:53 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:22.872 06:00:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:22.872 06:00:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:22.872 06:00:53 -- common/autotest_common.sh@10 -- # set +x 00:08:22.872 ************************************ 00:08:22.872 START TEST app_repeat 00:08:22.872 ************************************ 00:08:22.872 06:00:53 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:08:22.872 06:00:53 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:22.872 06:00:53 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:22.872 06:00:53 -- event/event.sh@13 -- # local nbd_list 00:08:22.872 06:00:53 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:22.872 06:00:53 -- event/event.sh@14 -- # local bdev_list 00:08:22.872 06:00:53 -- event/event.sh@15 -- # local repeat_times=4 00:08:22.872 06:00:53 -- event/event.sh@17 -- # modprobe nbd 00:08:22.872 06:00:53 -- event/event.sh@19 -- # repeat_pid=104556 00:08:22.872 06:00:53 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:22.872 Process app_repeat pid: 104556 00:08:22.872 06:00:53 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 104556' 00:08:22.872 06:00:53 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:22.872 06:00:53 -- event/event.sh@23 -- # for i in {0..2} 00:08:22.872 spdk_app_start Round 0 00:08:22.872 06:00:53 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:22.872 06:00:53 -- event/event.sh@25 -- # waitforlisten 104556 /var/tmp/spdk-nbd.sock 00:08:22.872 06:00:53 -- common/autotest_common.sh@819 -- # '[' -z 104556 ']' 00:08:22.872 06:00:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:22.872 06:00:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:22.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:22.872 06:00:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:22.872 06:00:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:22.872 06:00:53 -- common/autotest_common.sh@10 -- # set +x 00:08:22.872 [2024-06-11 06:00:53.252031] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
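app_repeat is started above with -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4: a dedicated RPC socket for the nbd helpers, a two-core mask (0x3 covers cores 0 and 1, matching the two reactor notices that follow), and, apparently tied to the repeat_times=4 variable in the log, four start/stop rounds:

    # two cores, RPC on the nbd test socket, four repeat rounds
    ./test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4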
00:08:22.872 [2024-06-11 06:00:53.252256] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104556 ] 00:08:22.872 [2024-06-11 06:00:53.443292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:23.129 [2024-06-11 06:00:53.735446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.129 [2024-06-11 06:00:53.735447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.694 06:00:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:23.694 06:00:54 -- common/autotest_common.sh@852 -- # return 0 00:08:23.694 06:00:54 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:24.259 Malloc0 00:08:24.259 06:00:54 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:24.517 Malloc1 00:08:24.517 06:00:54 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:24.517 06:00:54 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.517 06:00:54 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:24.517 06:00:54 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:24.517 06:00:54 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:24.517 06:00:54 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:24.517 06:00:54 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:24.517 06:00:54 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.517 06:00:54 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:24.517 06:00:54 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:24.517 06:00:54 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:24.517 06:00:54 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:24.517 06:00:54 -- bdev/nbd_common.sh@12 -- # local i 00:08:24.517 06:00:54 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:24.517 06:00:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:24.517 06:00:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:24.780 /dev/nbd0 00:08:24.780 06:00:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:24.780 06:00:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:24.780 06:00:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:08:24.780 06:00:55 -- common/autotest_common.sh@857 -- # local i 00:08:24.780 06:00:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:24.780 06:00:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:24.780 06:00:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:08:24.780 06:00:55 -- common/autotest_common.sh@861 -- # break 00:08:24.780 06:00:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:24.780 06:00:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:24.780 06:00:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:24.780 1+0 records in 00:08:24.780 1+0 records out 00:08:24.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00046673 s, 8.8 MB/s 00:08:24.780 06:00:55 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:24.780 06:00:55 -- common/autotest_common.sh@874 -- # size=4096 00:08:24.780 06:00:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:24.780 06:00:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:24.780 06:00:55 -- common/autotest_common.sh@877 -- # return 0 00:08:24.780 06:00:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:24.780 06:00:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:24.780 06:00:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:25.061 /dev/nbd1 00:08:25.061 06:00:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:25.061 06:00:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:25.061 06:00:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:08:25.061 06:00:55 -- common/autotest_common.sh@857 -- # local i 00:08:25.061 06:00:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:25.061 06:00:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:25.061 06:00:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:08:25.061 06:00:55 -- common/autotest_common.sh@861 -- # break 00:08:25.061 06:00:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:25.061 06:00:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:25.061 06:00:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:25.061 1+0 records in 00:08:25.061 1+0 records out 00:08:25.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364153 s, 11.2 MB/s 00:08:25.061 06:00:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:25.061 06:00:55 -- common/autotest_common.sh@874 -- # size=4096 00:08:25.061 06:00:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:25.061 06:00:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:25.061 06:00:55 -- common/autotest_common.sh@877 -- # return 0 00:08:25.061 06:00:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:25.061 06:00:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:25.061 06:00:55 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:25.061 06:00:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.061 06:00:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:25.320 06:00:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:25.320 { 00:08:25.320 "nbd_device": "/dev/nbd0", 00:08:25.320 "bdev_name": "Malloc0" 00:08:25.320 }, 00:08:25.320 { 00:08:25.320 "nbd_device": "/dev/nbd1", 00:08:25.320 "bdev_name": "Malloc1" 00:08:25.320 } 00:08:25.320 ]' 00:08:25.320 06:00:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:25.320 06:00:55 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:25.320 { 00:08:25.320 "nbd_device": "/dev/nbd0", 00:08:25.320 "bdev_name": "Malloc0" 00:08:25.320 }, 00:08:25.320 { 00:08:25.320 "nbd_device": "/dev/nbd1", 00:08:25.320 "bdev_name": "Malloc1" 00:08:25.320 } 00:08:25.320 ]' 00:08:25.578 06:00:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:25.578 /dev/nbd1' 00:08:25.578 06:00:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:25.578 06:00:55 -- bdev/nbd_common.sh@65 -- # echo 
'/dev/nbd0 00:08:25.578 /dev/nbd1' 00:08:25.578 06:00:56 -- bdev/nbd_common.sh@65 -- # count=2 00:08:25.578 06:00:56 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:25.578 06:00:56 -- bdev/nbd_common.sh@95 -- # count=2 00:08:25.578 06:00:56 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:25.578 06:00:56 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:25.578 06:00:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.578 06:00:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:25.578 06:00:56 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:25.578 06:00:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:25.578 06:00:56 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:25.578 06:00:56 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:25.578 256+0 records in 00:08:25.578 256+0 records out 00:08:25.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00928907 s, 113 MB/s 00:08:25.578 06:00:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:25.578 06:00:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:25.578 256+0 records in 00:08:25.578 256+0 records out 00:08:25.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0272512 s, 38.5 MB/s 00:08:25.578 06:00:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:25.578 06:00:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:25.578 256+0 records in 00:08:25.578 256+0 records out 00:08:25.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0303245 s, 34.6 MB/s 00:08:25.578 06:00:56 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:25.578 06:00:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.578 06:00:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:25.578 06:00:56 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:25.578 06:00:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:25.578 06:00:56 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:25.578 06:00:56 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:25.578 06:00:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:25.578 06:00:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:25.578 06:00:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:25.579 06:00:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:25.579 06:00:56 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:25.579 06:00:56 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:25.579 06:00:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.579 06:00:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.579 06:00:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:25.579 06:00:56 -- bdev/nbd_common.sh@51 -- # local i 00:08:25.579 06:00:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.579 06:00:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:25.837 06:00:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:25.837 06:00:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:25.837 06:00:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:25.837 06:00:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:25.837 06:00:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:25.837 06:00:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:25.837 06:00:56 -- bdev/nbd_common.sh@41 -- # break 00:08:25.837 06:00:56 -- bdev/nbd_common.sh@45 -- # return 0 00:08:25.837 06:00:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.837 06:00:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:26.095 06:00:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:26.095 06:00:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:26.095 06:00:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:26.095 06:00:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.095 06:00:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.095 06:00:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:26.095 06:00:56 -- bdev/nbd_common.sh@41 -- # break 00:08:26.095 06:00:56 -- bdev/nbd_common.sh@45 -- # return 0 00:08:26.095 06:00:56 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:26.095 06:00:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.095 06:00:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:26.354 06:00:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:26.354 06:00:56 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:26.354 06:00:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:26.613 06:00:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:26.613 06:00:57 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:26.613 06:00:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:26.613 06:00:57 -- bdev/nbd_common.sh@65 -- # true 00:08:26.613 06:00:57 -- bdev/nbd_common.sh@65 -- # count=0 00:08:26.613 06:00:57 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:26.613 06:00:57 -- bdev/nbd_common.sh@104 -- # count=0 00:08:26.613 06:00:57 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:26.613 06:00:57 -- bdev/nbd_common.sh@109 -- # return 0 00:08:26.613 06:00:57 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:27.180 06:00:57 -- event/event.sh@35 -- # sleep 3 00:08:28.554 [2024-06-11 06:00:59.116707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:28.811 [2024-06-11 06:00:59.393690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.811 [2024-06-11 06:00:59.393694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.069 [2024-06-11 06:00:59.662544] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:29.069 [2024-06-11 06:00:59.663091] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
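The round above finishes by confirming that no NBD devices remain attached: nbd_get_disks returns '[]', jq extracts an empty device list, and grep -c yields a count of 0. A minimal sketch of that counting pattern in bash, using the same rpc.py path and socket as this run (the variable names and the `|| true` guard are illustrative, not copied from the source script):

# count attached NBD devices over the target's RPC socket
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
nbd_disks_json=$("$rpc" -s "$sock" nbd_get_disks)           # '[]' once both disks are stopped
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)  # grep -c exits 1 on zero hits
[ "$count" -ne 0 ] && exit 1                                # the test expects 0 at this point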
00:08:30.003 06:01:00 -- event/event.sh@23 -- # for i in {0..2} 00:08:30.003 06:01:00 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:30.004 spdk_app_start Round 1 00:08:30.004 06:01:00 -- event/event.sh@25 -- # waitforlisten 104556 /var/tmp/spdk-nbd.sock 00:08:30.004 06:01:00 -- common/autotest_common.sh@819 -- # '[' -z 104556 ']' 00:08:30.004 06:01:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:30.004 06:01:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:30.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:30.004 06:01:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:30.004 06:01:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:30.004 06:01:00 -- common/autotest_common.sh@10 -- # set +x 00:08:30.261 06:01:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:30.261 06:01:00 -- common/autotest_common.sh@852 -- # return 0 00:08:30.261 06:01:00 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:30.518 Malloc0 00:08:30.518 06:01:01 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:31.083 Malloc1 00:08:31.083 06:01:01 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:31.083 06:01:01 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.083 06:01:01 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:31.083 06:01:01 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:31.083 06:01:01 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.083 06:01:01 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:31.083 06:01:01 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:31.083 06:01:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.083 06:01:01 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:31.083 06:01:01 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:31.083 06:01:01 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.083 06:01:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:31.083 06:01:01 -- bdev/nbd_common.sh@12 -- # local i 00:08:31.083 06:01:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:31.083 06:01:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:31.083 06:01:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:31.340 /dev/nbd0 00:08:31.340 06:01:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:31.340 06:01:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:31.340 06:01:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:08:31.340 06:01:01 -- common/autotest_common.sh@857 -- # local i 00:08:31.340 06:01:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:31.340 06:01:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:31.340 06:01:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:08:31.340 06:01:01 -- common/autotest_common.sh@861 -- # break 00:08:31.341 06:01:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:31.341 06:01:01 -- common/autotest_common.sh@872 -- # (( 
i <= 20 )) 00:08:31.341 06:01:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:31.341 1+0 records in 00:08:31.341 1+0 records out 00:08:31.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026383 s, 15.5 MB/s 00:08:31.341 06:01:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:31.341 06:01:01 -- common/autotest_common.sh@874 -- # size=4096 00:08:31.341 06:01:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:31.341 06:01:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:31.341 06:01:01 -- common/autotest_common.sh@877 -- # return 0 00:08:31.341 06:01:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:31.341 06:01:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:31.341 06:01:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:31.598 /dev/nbd1 00:08:31.598 06:01:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:31.598 06:01:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:31.598 06:01:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:08:31.598 06:01:02 -- common/autotest_common.sh@857 -- # local i 00:08:31.598 06:01:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:31.598 06:01:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:31.598 06:01:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:08:31.598 06:01:02 -- common/autotest_common.sh@861 -- # break 00:08:31.598 06:01:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:31.598 06:01:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:31.598 06:01:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:31.598 1+0 records in 00:08:31.598 1+0 records out 00:08:31.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503368 s, 8.1 MB/s 00:08:31.598 06:01:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:31.598 06:01:02 -- common/autotest_common.sh@874 -- # size=4096 00:08:31.598 06:01:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:31.598 06:01:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:31.598 06:01:02 -- common/autotest_common.sh@877 -- # return 0 00:08:31.598 06:01:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:31.598 06:01:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:31.598 06:01:02 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:31.598 06:01:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.598 06:01:02 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:32.162 06:01:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:32.162 { 00:08:32.162 "nbd_device": "/dev/nbd0", 00:08:32.162 "bdev_name": "Malloc0" 00:08:32.162 }, 00:08:32.162 { 00:08:32.162 "nbd_device": "/dev/nbd1", 00:08:32.162 "bdev_name": "Malloc1" 00:08:32.162 } 00:08:32.162 ]' 00:08:32.162 06:01:02 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:32.162 { 00:08:32.162 "nbd_device": "/dev/nbd0", 00:08:32.162 "bdev_name": "Malloc0" 00:08:32.162 }, 00:08:32.162 { 00:08:32.162 "nbd_device": "/dev/nbd1", 00:08:32.162 "bdev_name": "Malloc1" 00:08:32.162 } 
00:08:32.162 ]' 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:32.163 /dev/nbd1' 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:32.163 /dev/nbd1' 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@65 -- # count=2 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@95 -- # count=2 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:32.163 256+0 records in 00:08:32.163 256+0 records out 00:08:32.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00764317 s, 137 MB/s 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:32.163 256+0 records in 00:08:32.163 256+0 records out 00:08:32.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264656 s, 39.6 MB/s 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:32.163 256+0 records in 00:08:32.163 256+0 records out 00:08:32.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0379949 s, 27.6 MB/s 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
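The dd and cmp lines above are the heart of nbd_rpc_data_verify: one random 1 MiB file is written through both NBD devices with O_DIRECT and then byte-compared back. A condensed sketch of that write/verify pass, reusing the paths from the trace (the loop structure is inferred from the repeated per-device commands):

tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
nbd_list=('/dev/nbd0' '/dev/nbd1')
# write phase: 256 x 4096-byte blocks of random data, copied onto every device
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
  dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done
# verify phase: the first 1M read back from each device must match the file
for dev in "${nbd_list[@]}"; do
  cmp -b -n 1M "$tmp_file" "$dev"
done
rm "$tmp_file"   # reference copy is discarded after a clean compare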
00:08:32.163 06:01:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@51 -- # local i 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.163 06:01:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:32.421 06:01:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:32.421 06:01:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:32.421 06:01:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:32.421 06:01:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.421 06:01:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.421 06:01:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:32.421 06:01:03 -- bdev/nbd_common.sh@41 -- # break 00:08:32.421 06:01:03 -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.421 06:01:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.421 06:01:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:32.986 06:01:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:32.986 06:01:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:32.986 06:01:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:32.986 06:01:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.986 06:01:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.986 06:01:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:32.986 06:01:03 -- bdev/nbd_common.sh@41 -- # break 00:08:32.986 06:01:03 -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.986 06:01:03 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:32.986 06:01:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:32.986 06:01:03 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:32.986 06:01:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:32.986 06:01:03 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:32.986 06:01:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:33.243 06:01:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:33.243 06:01:03 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:33.243 06:01:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:33.243 06:01:03 -- bdev/nbd_common.sh@65 -- # true 00:08:33.243 06:01:03 -- bdev/nbd_common.sh@65 -- # count=0 00:08:33.243 06:01:03 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:33.243 06:01:03 -- bdev/nbd_common.sh@104 -- # count=0 00:08:33.243 06:01:03 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:33.243 06:01:03 -- bdev/nbd_common.sh@109 -- # return 0 00:08:33.243 06:01:03 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:33.809 06:01:04 -- event/event.sh@35 -- # sleep 3 00:08:35.737 [2024-06-11 06:01:05.905878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:35.737 [2024-06-11 06:01:06.182163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.737 [2024-06-11 06:01:06.182166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.995 [2024-06-11 06:01:06.456684] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
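Each 'spdk_app_start Round N' cycle follows the same script: wait for the app's RPC socket, create two malloc bdevs, attach and exercise them over NBD, then ask the app to restart itself with SIGTERM. A sketch of the driving loop as reconstructed from the traced event.sh line numbers (the body is paraphrased; only the loop header, echo, kill, and sleep appear verbatim in the trace):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in {0..2}; do
  echo "spdk_app_start Round $i"
  waitforlisten 104556 /var/tmp/spdk-nbd.sock       # app_repeat is listening again
  # ... bdev_malloc_create x2, nbd_start_disk x2, write/verify, nbd_stop_disk x2 ...
  "$rpc_py" -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
  sleep 3                                           # let the app restart before the next round
done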
00:08:35.995 [2024-06-11 06:01:06.456779] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:36.560 spdk_app_start Round 2 00:08:36.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:36.560 06:01:07 -- event/event.sh@23 -- # for i in {0..2} 00:08:36.560 06:01:07 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:36.560 06:01:07 -- event/event.sh@25 -- # waitforlisten 104556 /var/tmp/spdk-nbd.sock 00:08:36.560 06:01:07 -- common/autotest_common.sh@819 -- # '[' -z 104556 ']' 00:08:36.560 06:01:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:36.560 06:01:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:36.560 06:01:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:36.560 06:01:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:36.560 06:01:07 -- common/autotest_common.sh@10 -- # set +x 00:08:37.124 06:01:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:37.124 06:01:07 -- common/autotest_common.sh@852 -- # return 0 00:08:37.124 06:01:07 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:37.381 Malloc0 00:08:37.381 06:01:07 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:37.637 Malloc1 00:08:37.637 06:01:08 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:37.637 06:01:08 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.637 06:01:08 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:37.637 06:01:08 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:37.637 06:01:08 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:37.637 06:01:08 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:37.637 06:01:08 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:37.637 06:01:08 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.637 06:01:08 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:37.637 06:01:08 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:37.637 06:01:08 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:37.637 06:01:08 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:37.637 06:01:08 -- bdev/nbd_common.sh@12 -- # local i 00:08:37.637 06:01:08 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:37.637 06:01:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:37.637 06:01:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:37.895 /dev/nbd0 00:08:37.895 06:01:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:37.895 06:01:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:37.895 06:01:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:08:37.895 06:01:08 -- common/autotest_common.sh@857 -- # local i 00:08:37.895 06:01:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:37.895 06:01:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:37.895 06:01:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:08:37.895 06:01:08 -- 
common/autotest_common.sh@861 -- # break 00:08:37.895 06:01:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:37.895 06:01:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:37.895 06:01:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:37.895 1+0 records in 00:08:37.895 1+0 records out 00:08:37.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000676323 s, 6.1 MB/s 00:08:37.895 06:01:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:37.895 06:01:08 -- common/autotest_common.sh@874 -- # size=4096 00:08:37.895 06:01:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:37.895 06:01:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:37.895 06:01:08 -- common/autotest_common.sh@877 -- # return 0 00:08:37.895 06:01:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:37.895 06:01:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:37.895 06:01:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:38.154 /dev/nbd1 00:08:38.154 06:01:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:38.154 06:01:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:38.154 06:01:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:08:38.154 06:01:08 -- common/autotest_common.sh@857 -- # local i 00:08:38.154 06:01:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:38.154 06:01:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:38.154 06:01:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:08:38.154 06:01:08 -- common/autotest_common.sh@861 -- # break 00:08:38.154 06:01:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:38.154 06:01:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:38.154 06:01:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:38.154 1+0 records in 00:08:38.154 1+0 records out 00:08:38.154 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049944 s, 8.2 MB/s 00:08:38.154 06:01:08 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:38.154 06:01:08 -- common/autotest_common.sh@874 -- # size=4096 00:08:38.154 06:01:08 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:38.154 06:01:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:38.154 06:01:08 -- common/autotest_common.sh@877 -- # return 0 00:08:38.154 06:01:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:38.154 06:01:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:38.154 06:01:08 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:38.154 06:01:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:38.154 06:01:08 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:38.412 06:01:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:38.412 { 00:08:38.412 "nbd_device": "/dev/nbd0", 00:08:38.412 "bdev_name": "Malloc0" 00:08:38.412 }, 00:08:38.412 { 00:08:38.412 "nbd_device": "/dev/nbd1", 00:08:38.412 "bdev_name": "Malloc1" 00:08:38.412 } 00:08:38.412 ]' 00:08:38.412 06:01:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:38.412 06:01:09 -- 
bdev/nbd_common.sh@64 -- # echo '[ 00:08:38.412 { 00:08:38.412 "nbd_device": "/dev/nbd0", 00:08:38.412 "bdev_name": "Malloc0" 00:08:38.412 }, 00:08:38.412 { 00:08:38.412 "nbd_device": "/dev/nbd1", 00:08:38.412 "bdev_name": "Malloc1" 00:08:38.412 } 00:08:38.412 ]' 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:38.724 /dev/nbd1' 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:38.724 /dev/nbd1' 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@65 -- # count=2 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@95 -- # count=2 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:38.724 256+0 records in 00:08:38.724 256+0 records out 00:08:38.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102468 s, 102 MB/s 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:38.724 256+0 records in 00:08:38.724 256+0 records out 00:08:38.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025024 s, 41.9 MB/s 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:38.724 256+0 records in 00:08:38.724 256+0 records out 00:08:38.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0340591 s, 30.8 MB/s 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:38.724 
06:01:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@51 -- # local i 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:38.724 06:01:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:38.983 06:01:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:38.983 06:01:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:38.983 06:01:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:38.983 06:01:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:38.983 06:01:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:38.983 06:01:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:38.983 06:01:09 -- bdev/nbd_common.sh@41 -- # break 00:08:38.983 06:01:09 -- bdev/nbd_common.sh@45 -- # return 0 00:08:38.983 06:01:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:38.983 06:01:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:39.241 06:01:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:39.241 06:01:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:39.241 06:01:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:39.241 06:01:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:39.241 06:01:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:39.241 06:01:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:39.241 06:01:09 -- bdev/nbd_common.sh@41 -- # break 00:08:39.241 06:01:09 -- bdev/nbd_common.sh@45 -- # return 0 00:08:39.241 06:01:09 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:39.241 06:01:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:39.241 06:01:09 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:39.547 06:01:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:39.547 06:01:10 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:39.547 06:01:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:39.547 06:01:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:39.547 06:01:10 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:39.547 06:01:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:39.547 06:01:10 -- bdev/nbd_common.sh@65 -- # true 00:08:39.547 06:01:10 -- bdev/nbd_common.sh@65 -- # count=0 00:08:39.547 06:01:10 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:39.547 06:01:10 -- bdev/nbd_common.sh@104 -- # count=0 00:08:39.547 06:01:10 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:39.547 06:01:10 -- bdev/nbd_common.sh@109 -- # return 0 00:08:39.547 06:01:10 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:40.183 06:01:10 -- event/event.sh@35 -- # sleep 3 00:08:41.590 [2024-06-11 06:01:12.152260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:41.850 [2024-06-11 06:01:12.401875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.850 [2024-06-11 06:01:12.401877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.108 [2024-06-11 06:01:12.646953] notify.c: 45:spdk_notify_type_register: *NOTICE*: 
Notification type 'bdev_register' already registered. 00:08:42.108 [2024-06-11 06:01:12.647492] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:43.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:43.042 06:01:13 -- event/event.sh@38 -- # waitforlisten 104556 /var/tmp/spdk-nbd.sock 00:08:43.042 06:01:13 -- common/autotest_common.sh@819 -- # '[' -z 104556 ']' 00:08:43.042 06:01:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:43.042 06:01:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:43.042 06:01:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:43.042 06:01:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:43.042 06:01:13 -- common/autotest_common.sh@10 -- # set +x 00:08:43.301 06:01:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:43.301 06:01:13 -- common/autotest_common.sh@852 -- # return 0 00:08:43.301 06:01:13 -- event/event.sh@39 -- # killprocess 104556 00:08:43.301 06:01:13 -- common/autotest_common.sh@926 -- # '[' -z 104556 ']' 00:08:43.301 06:01:13 -- common/autotest_common.sh@930 -- # kill -0 104556 00:08:43.301 06:01:13 -- common/autotest_common.sh@931 -- # uname 00:08:43.301 06:01:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:43.301 06:01:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 104556 00:08:43.301 killing process with pid 104556 00:08:43.301 06:01:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:43.301 06:01:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:43.301 06:01:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 104556' 00:08:43.301 06:01:13 -- common/autotest_common.sh@945 -- # kill 104556 00:08:43.301 06:01:13 -- common/autotest_common.sh@950 -- # wait 104556 00:08:45.202 spdk_app_start is called in Round 0. 00:08:45.202 Shutdown signal received, stop current app iteration 00:08:45.202 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:08:45.202 spdk_app_start is called in Round 1. 00:08:45.202 Shutdown signal received, stop current app iteration 00:08:45.202 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:08:45.202 spdk_app_start is called in Round 2. 00:08:45.202 Shutdown signal received, stop current app iteration 00:08:45.202 Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 reinitialization... 00:08:45.202 spdk_app_start is called in Round 3. 
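killprocess, traced just above, refuses to signal anything it should not: the pid must still be alive and its comm name must not be sudo before SIGTERM is sent and the process is reaped. A sketch of that guard logic inferred from the traced commands (the function body is a reconstruction, not the actual autotest_common.sh source):

killprocess() {
  local pid=$1 process_name=
  [ -n "$pid" ] || return 1
  kill -0 "$pid" 2> /dev/null || return 1           # already gone, nothing to do
  if [ "$(uname)" = Linux ]; then
    process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0 for SPDK apps
  fi
  [ "$process_name" = sudo ] && return 1            # assumed bail-out; the real helper may differ
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                                       # reap so sockets and hugepages free up
}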
00:08:45.202 Shutdown signal received, stop current app iteration 00:08:45.202 ************************************ 00:08:45.202 END TEST app_repeat 00:08:45.202 ************************************ 00:08:45.202 06:01:15 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:45.202 06:01:15 -- event/event.sh@42 -- # return 0 00:08:45.202 00:08:45.202 real 0m22.194s 00:08:45.202 user 0m46.256s 00:08:45.202 sys 0m4.070s 00:08:45.202 06:01:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.202 06:01:15 -- common/autotest_common.sh@10 -- # set +x 00:08:45.202 06:01:15 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:45.202 06:01:15 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:45.202 06:01:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:45.202 06:01:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:45.202 06:01:15 -- common/autotest_common.sh@10 -- # set +x 00:08:45.202 ************************************ 00:08:45.202 START TEST cpu_locks 00:08:45.202 ************************************ 00:08:45.202 06:01:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:45.202 * Looking for test storage... 00:08:45.203 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:45.203 06:01:15 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:45.203 06:01:15 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:45.203 06:01:15 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:45.203 06:01:15 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:45.203 06:01:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:45.203 06:01:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:45.203 06:01:15 -- common/autotest_common.sh@10 -- # set +x 00:08:45.203 ************************************ 00:08:45.203 START TEST default_locks 00:08:45.203 ************************************ 00:08:45.203 06:01:15 -- common/autotest_common.sh@1104 -- # default_locks 00:08:45.203 06:01:15 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=105098 00:08:45.203 06:01:15 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:45.203 06:01:15 -- event/cpu_locks.sh@47 -- # waitforlisten 105098 00:08:45.203 06:01:15 -- common/autotest_common.sh@819 -- # '[' -z 105098 ']' 00:08:45.203 06:01:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.203 06:01:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:45.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.203 06:01:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.203 06:01:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:45.203 06:01:15 -- common/autotest_common.sh@10 -- # set +x 00:08:45.203 [2024-06-11 06:01:15.641721] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
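default_locks launches a bare spdk_tgt pinned to core 0 and blocks in waitforlisten until the target's UNIX-domain RPC socket answers. A minimal sketch of that startup handshake with the binary and socket paths from this run (the polling loop and the rpc_get_methods probe are assumptions; waitforlisten's real body is not shown in this excerpt):

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
rpc_addr=/var/tmp/spdk.sock
"$spdk_tgt" -m 0x1 &                  # -m 0x1: single reactor on core 0
spdk_tgt_pid=$!
echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
for ((i = 1; i <= 100; i++)); do      # up to 100 retries, as max_retries=100 suggests
  "$rpc_py" -s "$rpc_addr" rpc_get_methods &> /dev/null && break
  sleep 0.1
done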
00:08:45.203 [2024-06-11 06:01:15.642686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105098 ] 00:08:45.203 [2024-06-11 06:01:15.808706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.461 [2024-06-11 06:01:16.064236] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:45.461 [2024-06-11 06:01:16.064460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.886 06:01:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:46.886 06:01:17 -- common/autotest_common.sh@852 -- # return 0 00:08:46.886 06:01:17 -- event/cpu_locks.sh@49 -- # locks_exist 105098 00:08:46.886 06:01:17 -- event/cpu_locks.sh@22 -- # lslocks -p 105098 00:08:46.886 06:01:17 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:47.145 06:01:17 -- event/cpu_locks.sh@50 -- # killprocess 105098 00:08:47.145 06:01:17 -- common/autotest_common.sh@926 -- # '[' -z 105098 ']' 00:08:47.145 06:01:17 -- common/autotest_common.sh@930 -- # kill -0 105098 00:08:47.145 06:01:17 -- common/autotest_common.sh@931 -- # uname 00:08:47.145 06:01:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:47.145 06:01:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105098 00:08:47.145 06:01:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:47.145 06:01:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:47.145 killing process with pid 105098 00:08:47.145 06:01:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105098' 00:08:47.145 06:01:17 -- common/autotest_common.sh@945 -- # kill 105098 00:08:47.145 06:01:17 -- common/autotest_common.sh@950 -- # wait 105098 00:08:49.690 06:01:20 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 105098 00:08:49.690 06:01:20 -- common/autotest_common.sh@640 -- # local es=0 00:08:49.690 06:01:20 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 105098 00:08:49.690 06:01:20 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:08:49.690 06:01:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:49.690 06:01:20 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:08:49.690 06:01:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:49.690 06:01:20 -- common/autotest_common.sh@643 -- # waitforlisten 105098 00:08:49.690 06:01:20 -- common/autotest_common.sh@819 -- # '[' -z 105098 ']' 00:08:49.690 06:01:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.690 06:01:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:49.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.690 06:01:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
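locks_exist is the positive assertion of this whole test group: while pid 105098 runs, lslocks must show it holding a file lock whose name contains spdk_cpu_lock, one per claimed core. A sketch of the check exactly as the trace shows it (the wrapper body is reconstructed from the two traced commands; the comment is mine):

locks_exist() {
  local pid=$1
  # each reactor core claims a flock'd spdk_cpu_lock file; grep -q fails if none is held
  lslocks -p "$pid" | grep -q spdk_cpu_lock
}
locks_exist 105098   # succeeds while spdk_tgt holds core 0's lock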
00:08:49.690 06:01:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:49.690 06:01:20 -- common/autotest_common.sh@10 -- # set +x 00:08:49.690 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (105098) - No such process 00:08:49.690 ERROR: process (pid: 105098) is no longer running 00:08:49.690 06:01:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:49.690 06:01:20 -- common/autotest_common.sh@852 -- # return 1 00:08:49.690 06:01:20 -- common/autotest_common.sh@643 -- # es=1 00:08:49.690 06:01:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:49.690 06:01:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:49.690 06:01:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:49.690 06:01:20 -- event/cpu_locks.sh@54 -- # no_locks 00:08:49.690 06:01:20 -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:49.690 06:01:20 -- event/cpu_locks.sh@26 -- # local lock_files 00:08:49.690 06:01:20 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:49.690 00:08:49.690 real 0m4.749s 00:08:49.690 user 0m4.769s 00:08:49.690 sys 0m0.891s 00:08:49.690 06:01:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.690 ************************************ 00:08:49.690 END TEST default_locks 00:08:49.690 06:01:20 -- common/autotest_common.sh@10 -- # set +x 00:08:49.690 ************************************ 00:08:49.949 06:01:20 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:49.949 06:01:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:49.949 06:01:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:49.949 06:01:20 -- common/autotest_common.sh@10 -- # set +x 00:08:49.949 ************************************ 00:08:49.949 START TEST default_locks_via_rpc 00:08:49.949 ************************************ 00:08:49.949 06:01:20 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:08:49.949 06:01:20 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=105194 00:08:49.949 06:01:20 -- event/cpu_locks.sh@63 -- # waitforlisten 105194 00:08:49.949 06:01:20 -- common/autotest_common.sh@819 -- # '[' -z 105194 ']' 00:08:49.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.949 06:01:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.949 06:01:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:49.949 06:01:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.949 06:01:20 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:49.949 06:01:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:49.949 06:01:20 -- common/autotest_common.sh@10 -- # set +x 00:08:49.949 [2024-06-11 06:01:20.465096] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
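The NOT helper seen above inverts a command's exit status so that an expected failure, here waitforlisten on an already-killed pid, makes the test pass: it runs the command, folds signal-style codes above 128 down, and succeeds only when the inner command failed. A sketch reconstructed from the traced es-handling lines (the exact folding arithmetic is an assumption; the valid_exec_arg type check is omitted):

NOT() {
  local es=0
  "$@" || es=$?
  (( es > 128 )) && es=$(( es & 255 ))   # normalize signal deaths, per the (( es > 128 )) trace
  (( !es == 0 ))                         # succeed only if the wrapped command failed
}
NOT waitforlisten 105098   # passes: the process was already gone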
00:08:49.950 [2024-06-11 06:01:20.465332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105194 ] 00:08:50.207 [2024-06-11 06:01:20.644744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.464 [2024-06-11 06:01:20.893497] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:50.464 [2024-06-11 06:01:20.893738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.836 06:01:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:51.836 06:01:22 -- common/autotest_common.sh@852 -- # return 0 00:08:51.836 06:01:22 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:51.836 06:01:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:51.836 06:01:22 -- common/autotest_common.sh@10 -- # set +x 00:08:51.836 06:01:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:51.836 06:01:22 -- event/cpu_locks.sh@67 -- # no_locks 00:08:51.836 06:01:22 -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:51.836 06:01:22 -- event/cpu_locks.sh@26 -- # local lock_files 00:08:51.836 06:01:22 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:51.836 06:01:22 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:51.836 06:01:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:51.836 06:01:22 -- common/autotest_common.sh@10 -- # set +x 00:08:51.836 06:01:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:51.836 06:01:22 -- event/cpu_locks.sh@71 -- # locks_exist 105194 00:08:51.836 06:01:22 -- event/cpu_locks.sh@22 -- # lslocks -p 105194 00:08:51.836 06:01:22 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:51.836 06:01:22 -- event/cpu_locks.sh@73 -- # killprocess 105194 00:08:51.836 06:01:22 -- common/autotest_common.sh@926 -- # '[' -z 105194 ']' 00:08:51.836 06:01:22 -- common/autotest_common.sh@930 -- # kill -0 105194 00:08:51.836 06:01:22 -- common/autotest_common.sh@931 -- # uname 00:08:51.836 06:01:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:51.836 06:01:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105194 00:08:51.836 06:01:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:51.836 killing process with pid 105194 00:08:51.836 06:01:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:51.836 06:01:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105194' 00:08:51.836 06:01:22 -- common/autotest_common.sh@945 -- # kill 105194 00:08:51.836 06:01:22 -- common/autotest_common.sh@950 -- # wait 105194 00:08:55.117 00:08:55.117 real 0m4.710s 00:08:55.117 user 0m4.684s 00:08:55.117 sys 0m0.843s 00:08:55.117 06:01:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:55.117 06:01:25 -- common/autotest_common.sh@10 -- # set +x 00:08:55.117 ************************************ 00:08:55.117 END TEST default_locks_via_rpc 00:08:55.117 ************************************ 00:08:55.117 06:01:25 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:55.117 06:01:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:55.117 06:01:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:55.117 06:01:25 -- common/autotest_common.sh@10 -- # set +x 00:08:55.117 
************************************ 00:08:55.117 START TEST non_locking_app_on_locked_coremask 00:08:55.117 ************************************ 00:08:55.117 06:01:25 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:08:55.117 06:01:25 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=105292 00:08:55.117 06:01:25 -- event/cpu_locks.sh@81 -- # waitforlisten 105292 /var/tmp/spdk.sock 00:08:55.117 06:01:25 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:55.117 06:01:25 -- common/autotest_common.sh@819 -- # '[' -z 105292 ']' 00:08:55.117 06:01:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.117 06:01:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:55.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.117 06:01:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.117 06:01:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:55.117 06:01:25 -- common/autotest_common.sh@10 -- # set +x 00:08:55.117 [2024-06-11 06:01:25.238615] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:55.117 [2024-06-11 06:01:25.238915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105292 ] 00:08:55.117 [2024-06-11 06:01:25.422992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.117 [2024-06-11 06:01:25.749401] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:55.117 [2024-06-11 06:01:25.749722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.489 06:01:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:56.489 06:01:26 -- common/autotest_common.sh@852 -- # return 0 00:08:56.489 06:01:26 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=105321 00:08:56.489 06:01:26 -- event/cpu_locks.sh@85 -- # waitforlisten 105321 /var/tmp/spdk2.sock 00:08:56.489 06:01:26 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:56.489 06:01:26 -- common/autotest_common.sh@819 -- # '[' -z 105321 ']' 00:08:56.489 06:01:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:56.489 06:01:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:56.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:56.489 06:01:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:56.489 06:01:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:56.489 06:01:26 -- common/autotest_common.sh@10 -- # set +x 00:08:56.489 [2024-06-11 06:01:27.100906] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:08:56.489 [2024-06-11 06:01:27.101233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105321 ] 00:08:56.746 [2024-06-11 06:01:27.287460] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
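non_locking_app_on_locked_coremask, starting here, proves that a second target can share core 0 with a lock-holding first target only when the newcomer opts out of locking: it gets its own RPC socket via -r and passes --disable-cpumask-locks. A sketch of the two launches from the traced arguments (pids noted from this run; waiting and error handling omitted):

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
# first instance locks core 0 and owns the default socket /var/tmp/spdk.sock
"$spdk_tgt" -m 0x1 &                                                  # pid 105292 in this run
# second instance: the same core mask is only legal because it skips the lock;
# without --disable-cpumask-locks it would abort on the held spdk_cpu_lock file
"$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # pid 105321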
00:08:56.746 [2024-06-11 06:01:27.287559] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.310 [2024-06-11 06:01:27.823255] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:57.310 [2024-06-11 06:01:27.823526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.839 06:01:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:59.839 06:01:30 -- common/autotest_common.sh@852 -- # return 0 00:08:59.839 06:01:30 -- event/cpu_locks.sh@87 -- # locks_exist 105292 00:08:59.839 06:01:30 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:59.839 06:01:30 -- event/cpu_locks.sh@22 -- # lslocks -p 105292 00:09:00.406 06:01:30 -- event/cpu_locks.sh@89 -- # killprocess 105292 00:09:00.406 06:01:30 -- common/autotest_common.sh@926 -- # '[' -z 105292 ']' 00:09:00.406 06:01:30 -- common/autotest_common.sh@930 -- # kill -0 105292 00:09:00.406 06:01:30 -- common/autotest_common.sh@931 -- # uname 00:09:00.406 06:01:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:00.406 06:01:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105292 00:09:00.406 06:01:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:00.406 06:01:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:00.406 killing process with pid 105292 00:09:00.406 06:01:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105292' 00:09:00.406 06:01:30 -- common/autotest_common.sh@945 -- # kill 105292 00:09:00.406 06:01:30 -- common/autotest_common.sh@950 -- # wait 105292 00:09:06.969 06:01:36 -- event/cpu_locks.sh@90 -- # killprocess 105321 00:09:06.969 06:01:36 -- common/autotest_common.sh@926 -- # '[' -z 105321 ']' 00:09:06.969 06:01:36 -- common/autotest_common.sh@930 -- # kill -0 105321 00:09:06.969 06:01:36 -- common/autotest_common.sh@931 -- # uname 00:09:06.969 06:01:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:06.969 06:01:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105321 00:09:06.969 06:01:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:06.969 killing process with pid 105321 00:09:06.969 06:01:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:06.969 06:01:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105321' 00:09:06.969 06:01:36 -- common/autotest_common.sh@945 -- # kill 105321 00:09:06.969 06:01:36 -- common/autotest_common.sh@950 -- # wait 105321 00:09:08.879 00:09:08.879 real 0m14.066s 00:09:08.879 user 0m14.794s 00:09:08.879 sys 0m2.103s 00:09:08.879 06:01:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.879 ************************************ 00:09:08.879 END TEST non_locking_app_on_locked_coremask 00:09:08.879 ************************************ 00:09:08.879 06:01:39 -- common/autotest_common.sh@10 -- # set +x 00:09:08.879 06:01:39 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:08.879 06:01:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:08.879 06:01:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:08.879 06:01:39 -- common/autotest_common.sh@10 -- # set +x 00:09:08.879 ************************************ 00:09:08.879 START TEST locking_app_on_unlocked_coremask 00:09:08.880 ************************************ 00:09:08.880 06:01:39 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:09:08.880 
06:01:39 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=105507 00:09:08.880 06:01:39 -- event/cpu_locks.sh@99 -- # waitforlisten 105507 /var/tmp/spdk.sock 00:09:08.880 06:01:39 -- common/autotest_common.sh@819 -- # '[' -z 105507 ']' 00:09:08.880 06:01:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.880 06:01:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:08.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.880 06:01:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.880 06:01:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:08.880 06:01:39 -- common/autotest_common.sh@10 -- # set +x 00:09:08.880 06:01:39 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:08.880 [2024-06-11 06:01:39.368840] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:08.880 [2024-06-11 06:01:39.369048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105507 ] 00:09:09.137 [2024-06-11 06:01:39.548952] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:09.137 [2024-06-11 06:01:39.549034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.395 [2024-06-11 06:01:39.799849] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:09.395 [2024-06-11 06:01:39.800118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.770 06:01:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:10.770 06:01:41 -- common/autotest_common.sh@852 -- # return 0 00:09:10.770 06:01:41 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=105537 00:09:10.770 06:01:41 -- event/cpu_locks.sh@103 -- # waitforlisten 105537 /var/tmp/spdk2.sock 00:09:10.770 06:01:41 -- common/autotest_common.sh@819 -- # '[' -z 105537 ']' 00:09:10.770 06:01:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:10.770 06:01:41 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:10.770 06:01:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:10.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:10.770 06:01:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:10.770 06:01:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:10.770 06:01:41 -- common/autotest_common.sh@10 -- # set +x 00:09:10.770 [2024-06-11 06:01:41.096510] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:09:10.770 [2024-06-11 06:01:41.096714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105537 ] 00:09:10.770 [2024-06-11 06:01:41.259083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.347 [2024-06-11 06:01:41.803750] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:11.347 [2024-06-11 06:01:41.803992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.873 06:01:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:13.873 06:01:44 -- common/autotest_common.sh@852 -- # return 0 00:09:13.873 06:01:44 -- event/cpu_locks.sh@105 -- # locks_exist 105537 00:09:13.873 06:01:44 -- event/cpu_locks.sh@22 -- # lslocks -p 105537 00:09:13.873 06:01:44 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:14.436 06:01:44 -- event/cpu_locks.sh@107 -- # killprocess 105507 00:09:14.436 06:01:44 -- common/autotest_common.sh@926 -- # '[' -z 105507 ']' 00:09:14.436 06:01:44 -- common/autotest_common.sh@930 -- # kill -0 105507 00:09:14.436 06:01:44 -- common/autotest_common.sh@931 -- # uname 00:09:14.436 06:01:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:14.436 06:01:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105507 00:09:14.436 06:01:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:14.436 killing process with pid 105507 00:09:14.436 06:01:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:14.436 06:01:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105507' 00:09:14.436 06:01:44 -- common/autotest_common.sh@945 -- # kill 105507 00:09:14.436 06:01:44 -- common/autotest_common.sh@950 -- # wait 105507 00:09:19.699 06:01:50 -- event/cpu_locks.sh@108 -- # killprocess 105537 00:09:19.699 06:01:50 -- common/autotest_common.sh@926 -- # '[' -z 105537 ']' 00:09:19.699 06:01:50 -- common/autotest_common.sh@930 -- # kill -0 105537 00:09:19.699 06:01:50 -- common/autotest_common.sh@931 -- # uname 00:09:19.699 06:01:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:19.699 06:01:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105537 00:09:19.699 06:01:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:19.699 killing process with pid 105537 00:09:19.699 06:01:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:19.699 06:01:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105537' 00:09:19.699 06:01:50 -- common/autotest_common.sh@945 -- # kill 105537 00:09:19.699 06:01:50 -- common/autotest_common.sh@950 -- # wait 105537 00:09:22.273 00:09:22.273 real 0m13.477s 00:09:22.273 user 0m14.078s 00:09:22.273 sys 0m1.922s 00:09:22.273 06:01:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:22.273 06:01:52 -- common/autotest_common.sh@10 -- # set +x 00:09:22.273 ************************************ 00:09:22.273 END TEST locking_app_on_unlocked_coremask 00:09:22.273 ************************************ 00:09:22.273 06:01:52 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:22.273 06:01:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:22.273 06:01:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:22.273 06:01:52 -- 
common/autotest_common.sh@10 -- # set +x 00:09:22.273 ************************************ 00:09:22.273 START TEST locking_app_on_locked_coremask 00:09:22.273 ************************************ 00:09:22.273 06:01:52 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:09:22.273 06:01:52 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=105722 00:09:22.273 06:01:52 -- event/cpu_locks.sh@116 -- # waitforlisten 105722 /var/tmp/spdk.sock 00:09:22.273 06:01:52 -- common/autotest_common.sh@819 -- # '[' -z 105722 ']' 00:09:22.273 06:01:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.273 06:01:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:22.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.273 06:01:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.273 06:01:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:22.273 06:01:52 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:22.273 06:01:52 -- common/autotest_common.sh@10 -- # set +x 00:09:22.273 [2024-06-11 06:01:52.912252] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:22.273 [2024-06-11 06:01:52.912474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105722 ] 00:09:22.532 [2024-06-11 06:01:53.095696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.790 [2024-06-11 06:01:53.357015] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:22.790 [2024-06-11 06:01:53.357262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.166 06:01:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:24.166 06:01:54 -- common/autotest_common.sh@852 -- # return 0 00:09:24.167 06:01:54 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=105750 00:09:24.167 06:01:54 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 105750 /var/tmp/spdk2.sock 00:09:24.167 06:01:54 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:24.167 06:01:54 -- common/autotest_common.sh@640 -- # local es=0 00:09:24.167 06:01:54 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 105750 /var/tmp/spdk2.sock 00:09:24.167 06:01:54 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:09:24.167 06:01:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:24.167 06:01:54 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:09:24.167 06:01:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:24.167 06:01:54 -- common/autotest_common.sh@643 -- # waitforlisten 105750 /var/tmp/spdk2.sock 00:09:24.167 06:01:54 -- common/autotest_common.sh@819 -- # '[' -z 105750 ']' 00:09:24.167 06:01:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:24.167 06:01:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:24.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
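The NOT waitforlisten 105750 invocation above leans on an inversion idiom from autotest_common.sh: run a command that is supposed to fail and fold its failure into a success for the harness. The real helper also validates the argument with valid_exec_arg and maps exit codes through a case statement; the sketch below only keeps the shape of it:

  # Simplified NOT: succeed only when the wrapped command fails.
  NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=$(( es - 128 ))  # strip the killed-by-signal offset
    (( es != 0 ))
  }

  NOT false && echo "false failed, which is what NOT expects"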
00:09:24.167 06:01:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:24.167 06:01:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:24.167 06:01:54 -- common/autotest_common.sh@10 -- # set +x 00:09:24.167 [2024-06-11 06:01:54.620274] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:24.167 [2024-06-11 06:01:54.620438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105750 ] 00:09:24.167 [2024-06-11 06:01:54.777181] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 105722 has claimed it. 00:09:24.167 [2024-06-11 06:01:54.777263] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:24.733 ERROR: process (pid: 105750) is no longer running 00:09:24.733 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (105750) - No such process 00:09:24.733 06:01:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:24.733 06:01:55 -- common/autotest_common.sh@852 -- # return 1 00:09:24.733 06:01:55 -- common/autotest_common.sh@643 -- # es=1 00:09:24.733 06:01:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:24.733 06:01:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:24.733 06:01:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:24.733 06:01:55 -- event/cpu_locks.sh@122 -- # locks_exist 105722 00:09:24.733 06:01:55 -- event/cpu_locks.sh@22 -- # lslocks -p 105722 00:09:24.733 06:01:55 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:24.991 06:01:55 -- event/cpu_locks.sh@124 -- # killprocess 105722 00:09:24.991 06:01:55 -- common/autotest_common.sh@926 -- # '[' -z 105722 ']' 00:09:24.991 06:01:55 -- common/autotest_common.sh@930 -- # kill -0 105722 00:09:24.991 06:01:55 -- common/autotest_common.sh@931 -- # uname 00:09:24.991 06:01:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:24.991 06:01:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105722 00:09:24.991 06:01:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:24.991 06:01:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:24.991 06:01:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105722' 00:09:24.991 killing process with pid 105722 00:09:24.991 06:01:55 -- common/autotest_common.sh@945 -- # kill 105722 00:09:24.991 06:01:55 -- common/autotest_common.sh@950 -- # wait 105722 00:09:28.273 ************************************ 00:09:28.273 END TEST locking_app_on_locked_coremask 00:09:28.273 ************************************ 00:09:28.273 00:09:28.273 real 0m5.481s 00:09:28.273 user 0m5.713s 00:09:28.273 sys 0m1.009s 00:09:28.273 06:01:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:28.273 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:09:28.273 06:01:58 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:28.273 06:01:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:28.273 06:01:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:28.273 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:09:28.273 ************************************ 00:09:28.273 START TEST 
locking_overlapped_coremask 00:09:28.273 ************************************ 00:09:28.273 06:01:58 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:09:28.273 06:01:58 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=105828 00:09:28.273 06:01:58 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:28.273 06:01:58 -- event/cpu_locks.sh@133 -- # waitforlisten 105828 /var/tmp/spdk.sock 00:09:28.273 06:01:58 -- common/autotest_common.sh@819 -- # '[' -z 105828 ']' 00:09:28.273 06:01:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.273 06:01:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:28.273 06:01:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.273 06:01:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:28.273 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:09:28.273 [2024-06-11 06:01:58.451104] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:28.273 [2024-06-11 06:01:58.451585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105828 ] 00:09:28.273 [2024-06-11 06:01:58.644439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:28.273 [2024-06-11 06:01:58.902111] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:28.273 [2024-06-11 06:01:58.902635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.273 [2024-06-11 06:01:58.902737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:28.273 [2024-06-11 06:01:58.902737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.649 06:02:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:29.649 06:02:00 -- common/autotest_common.sh@852 -- # return 0 00:09:29.649 06:02:00 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=105865 00:09:29.649 06:02:00 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 105865 /var/tmp/spdk2.sock 00:09:29.649 06:02:00 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:29.649 06:02:00 -- common/autotest_common.sh@640 -- # local es=0 00:09:29.649 06:02:00 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 105865 /var/tmp/spdk2.sock 00:09:29.649 06:02:00 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:09:29.649 06:02:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:29.649 06:02:00 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:09:29.649 06:02:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:29.649 06:02:00 -- common/autotest_common.sh@643 -- # waitforlisten 105865 /var/tmp/spdk2.sock 00:09:29.649 06:02:00 -- common/autotest_common.sh@819 -- # '[' -z 105865 ']' 00:09:29.649 06:02:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:29.649 06:02:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:29.649 06:02:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
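The masks chosen here are what make the test meaningful: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the two targets overlap on exactly one core. The collision that follows can be predicted with a one-line mask intersection:

  # 0x7 = 0b00111 (cores 0,1,2), 0x1c = 0b11100 (cores 2,3,4).
  printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))  # prints 0x4, i.e. core 2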
00:09:29.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:29.649 06:02:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:29.649 06:02:00 -- common/autotest_common.sh@10 -- # set +x 00:09:29.649 [2024-06-11 06:02:00.185511] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:29.649 [2024-06-11 06:02:00.186136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105865 ] 00:09:29.906 [2024-06-11 06:02:00.382353] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 105828 has claimed it. 00:09:29.906 [2024-06-11 06:02:00.382441] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:30.472 ERROR: process (pid: 105865) is no longer running 00:09:30.472 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (105865) - No such process 00:09:30.472 06:02:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:30.472 06:02:00 -- common/autotest_common.sh@852 -- # return 1 00:09:30.472 06:02:00 -- common/autotest_common.sh@643 -- # es=1 00:09:30.472 06:02:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:30.472 06:02:00 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:30.472 06:02:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:30.472 06:02:00 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:30.472 06:02:00 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:30.473 06:02:00 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:30.473 06:02:00 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:30.473 06:02:00 -- event/cpu_locks.sh@141 -- # killprocess 105828 00:09:30.473 06:02:00 -- common/autotest_common.sh@926 -- # '[' -z 105828 ']' 00:09:30.473 06:02:00 -- common/autotest_common.sh@930 -- # kill -0 105828 00:09:30.473 06:02:00 -- common/autotest_common.sh@931 -- # uname 00:09:30.473 06:02:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:30.473 06:02:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105828 00:09:30.473 killing process with pid 105828 00:09:30.473 06:02:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:30.473 06:02:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:30.473 06:02:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105828' 00:09:30.473 06:02:00 -- common/autotest_common.sh@945 -- # kill 105828 00:09:30.473 06:02:00 -- common/autotest_common.sh@950 -- # wait 105828 00:09:32.998 ************************************ 00:09:32.998 END TEST locking_overlapped_coremask 00:09:32.998 ************************************ 00:09:32.998 00:09:32.998 real 0m5.222s 00:09:32.998 user 0m13.778s 00:09:32.998 sys 0m0.833s 00:09:32.998 06:02:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:32.998 06:02:03 -- common/autotest_common.sh@10 -- # set +x 00:09:32.998 06:02:03 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 
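check_remaining_locks, expanded inline above, is a compact glob comparison: with the overlapping launch refused and only pid 105828 running, exactly the three lock files for cores 0-2 may exist. A standalone version of the same assertion:

  # Only /var/tmp/spdk_cpu_lock_000..002 (cores 0-2 of mask 0x7) should
  # remain once the overlapping second target has been turned away.
  check_remaining_locks() {
    local locks=(/var/tmp/spdk_cpu_lock_*)
    local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]]
  }

  check_remaining_locks || echo "unexpected lock files left in /var/tmp"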
00:09:32.998 06:02:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:32.998 06:02:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:32.998 06:02:03 -- common/autotest_common.sh@10 -- # set +x 00:09:33.257 ************************************ 00:09:33.257 START TEST locking_overlapped_coremask_via_rpc 00:09:33.257 ************************************ 00:09:33.257 06:02:03 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:09:33.257 06:02:03 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=105934 00:09:33.257 06:02:03 -- event/cpu_locks.sh@149 -- # waitforlisten 105934 /var/tmp/spdk.sock 00:09:33.257 06:02:03 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:33.257 06:02:03 -- common/autotest_common.sh@819 -- # '[' -z 105934 ']' 00:09:33.257 06:02:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.257 06:02:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:33.257 06:02:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.257 06:02:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:33.257 06:02:03 -- common/autotest_common.sh@10 -- # set +x 00:09:33.257 [2024-06-11 06:02:03.729050] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:33.257 [2024-06-11 06:02:03.729447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105934 ] 00:09:33.516 [2024-06-11 06:02:03.903117] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:33.516 [2024-06-11 06:02:03.903405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:33.516 [2024-06-11 06:02:04.153108] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:33.516 [2024-06-11 06:02:04.153683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.516 [2024-06-11 06:02:04.153897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.516 [2024-06-11 06:02:04.153904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:34.917 06:02:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:34.917 06:02:05 -- common/autotest_common.sh@852 -- # return 0 00:09:34.917 06:02:05 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:34.917 06:02:05 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=105970 00:09:34.917 06:02:05 -- event/cpu_locks.sh@153 -- # waitforlisten 105970 /var/tmp/spdk2.sock 00:09:34.917 06:02:05 -- common/autotest_common.sh@819 -- # '[' -z 105970 ']' 00:09:34.917 06:02:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:34.917 06:02:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:34.917 06:02:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
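This variant launches both targets with --disable-cpumask-locks, so the overlapping masks (0x7 and 0x1c again share core 2) do not collide at startup; the locks are only taken later through JSON-RPC. The launch shape, assuming an SPDK build tree:

  # Neither instance takes core locks at startup, so the overlap on
  # core 2 is tolerated for now.
  ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  ./build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &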
00:09:34.917 06:02:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:34.917 06:02:05 -- common/autotest_common.sh@10 -- # set +x 00:09:34.917 [2024-06-11 06:02:05.422768] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:34.917 [2024-06-11 06:02:05.423239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105970 ] 00:09:35.175 [2024-06-11 06:02:05.622391] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:35.175 [2024-06-11 06:02:05.622472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:35.775 [2024-06-11 06:02:06.139193] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:35.775 [2024-06-11 06:02:06.139899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:35.775 [2024-06-11 06:02:06.152878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.775 [2024-06-11 06:02:06.152878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:38.305 06:02:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:38.305 06:02:08 -- common/autotest_common.sh@852 -- # return 0 00:09:38.305 06:02:08 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:38.305 06:02:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:38.305 06:02:08 -- common/autotest_common.sh@10 -- # set +x 00:09:38.305 06:02:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:38.305 06:02:08 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:38.305 06:02:08 -- common/autotest_common.sh@640 -- # local es=0 00:09:38.305 06:02:08 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:38.305 06:02:08 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:09:38.305 06:02:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:38.305 06:02:08 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:09:38.305 06:02:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:38.305 06:02:08 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:38.305 06:02:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:38.305 06:02:08 -- common/autotest_common.sh@10 -- # set +x 00:09:38.305 [2024-06-11 06:02:08.432423] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 105934 has claimed it. 
00:09:38.305 request: 00:09:38.305 { 00:09:38.305 "method": "framework_enable_cpumask_locks", 00:09:38.305 "req_id": 1 00:09:38.305 } 00:09:38.305 Got JSON-RPC error response 00:09:38.305 response: 00:09:38.305 { 00:09:38.305 "code": -32603, 00:09:38.305 "message": "Failed to claim CPU core: 2" 00:09:38.305 } 00:09:38.305 06:02:08 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:09:38.305 06:02:08 -- common/autotest_common.sh@643 -- # es=1 00:09:38.305 06:02:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:38.305 06:02:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:38.305 06:02:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:38.305 06:02:08 -- event/cpu_locks.sh@158 -- # waitforlisten 105934 /var/tmp/spdk.sock 00:09:38.305 06:02:08 -- common/autotest_common.sh@819 -- # '[' -z 105934 ']' 00:09:38.305 06:02:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.305 06:02:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:38.305 06:02:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.305 06:02:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:38.305 06:02:08 -- common/autotest_common.sh@10 -- # set +x 00:09:38.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:38.305 06:02:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:38.305 06:02:08 -- common/autotest_common.sh@852 -- # return 0 00:09:38.305 06:02:08 -- event/cpu_locks.sh@159 -- # waitforlisten 105970 /var/tmp/spdk2.sock 00:09:38.305 06:02:08 -- common/autotest_common.sh@819 -- # '[' -z 105970 ']' 00:09:38.305 06:02:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:38.305 06:02:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:38.305 06:02:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
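The request/response pair above is the runtime counterpart of the startup locks: framework_enable_cpumask_locks makes a target claim its cores on demand, and the second claim of core 2 is refused with JSON-RPC error -32603. Driven by hand with rpc.py (assuming an SPDK checkout), the sequence looks like:

  # First target (default /var/tmp/spdk.sock) claims cores 0-2.
  scripts/rpc.py framework_enable_cpumask_locks

  # Second target overlaps on core 2, so this call is expected to fail
  # with -32603 "Failed to claim CPU core: 2".
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
    || echo "expected failure: core 2 already locked"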
00:09:38.305 06:02:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:38.305 06:02:08 -- common/autotest_common.sh@10 -- # set +x 00:09:38.305 ************************************ 00:09:38.305 END TEST locking_overlapped_coremask_via_rpc 00:09:38.306 ************************************ 00:09:38.306 06:02:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:38.306 06:02:08 -- common/autotest_common.sh@852 -- # return 0 00:09:38.306 06:02:08 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:38.306 06:02:08 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:38.306 06:02:08 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:38.306 06:02:08 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:38.306 00:09:38.306 real 0m5.239s 00:09:38.306 user 0m1.793s 00:09:38.306 sys 0m0.394s 00:09:38.306 06:02:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:38.306 06:02:08 -- common/autotest_common.sh@10 -- # set +x 00:09:38.306 06:02:08 -- event/cpu_locks.sh@174 -- # cleanup 00:09:38.306 06:02:08 -- event/cpu_locks.sh@15 -- # [[ -z 105934 ]] 00:09:38.306 06:02:08 -- event/cpu_locks.sh@15 -- # killprocess 105934 00:09:38.306 06:02:08 -- common/autotest_common.sh@926 -- # '[' -z 105934 ']' 00:09:38.306 06:02:08 -- common/autotest_common.sh@930 -- # kill -0 105934 00:09:38.306 06:02:08 -- common/autotest_common.sh@931 -- # uname 00:09:38.306 06:02:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:38.306 06:02:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105934 00:09:38.563 killing process with pid 105934 00:09:38.563 06:02:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:38.563 06:02:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:38.563 06:02:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105934' 00:09:38.563 06:02:08 -- common/autotest_common.sh@945 -- # kill 105934 00:09:38.563 06:02:08 -- common/autotest_common.sh@950 -- # wait 105934 00:09:41.096 06:02:11 -- event/cpu_locks.sh@16 -- # [[ -z 105970 ]] 00:09:41.096 06:02:11 -- event/cpu_locks.sh@16 -- # killprocess 105970 00:09:41.097 06:02:11 -- common/autotest_common.sh@926 -- # '[' -z 105970 ']' 00:09:41.097 06:02:11 -- common/autotest_common.sh@930 -- # kill -0 105970 00:09:41.097 06:02:11 -- common/autotest_common.sh@931 -- # uname 00:09:41.097 06:02:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:41.097 06:02:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105970 00:09:41.383 killing process with pid 105970 00:09:41.383 06:02:11 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:09:41.383 06:02:11 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:09:41.383 06:02:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105970' 00:09:41.383 06:02:11 -- common/autotest_common.sh@945 -- # kill 105970 00:09:41.383 06:02:11 -- common/autotest_common.sh@950 -- # wait 105970 00:09:43.920 06:02:14 -- event/cpu_locks.sh@18 -- # rm -f 00:09:43.920 06:02:14 -- event/cpu_locks.sh@1 -- # cleanup 00:09:43.920 06:02:14 -- event/cpu_locks.sh@15 -- # [[ -z 105934 ]] 00:09:43.920 06:02:14 -- event/cpu_locks.sh@15 -- # killprocess 105934 00:09:43.920 
Process with pid 105934 is not found 00:09:43.920 Process with pid 105970 is not found 00:09:43.920 06:02:14 -- common/autotest_common.sh@926 -- # '[' -z 105934 ']' 00:09:43.920 06:02:14 -- common/autotest_common.sh@930 -- # kill -0 105934 00:09:43.920 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (105934) - No such process 00:09:43.920 06:02:14 -- common/autotest_common.sh@953 -- # echo 'Process with pid 105934 is not found' 00:09:43.920 06:02:14 -- event/cpu_locks.sh@16 -- # [[ -z 105970 ]] 00:09:43.920 06:02:14 -- event/cpu_locks.sh@16 -- # killprocess 105970 00:09:43.920 06:02:14 -- common/autotest_common.sh@926 -- # '[' -z 105970 ']' 00:09:43.920 06:02:14 -- common/autotest_common.sh@930 -- # kill -0 105970 00:09:43.920 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (105970) - No such process 00:09:43.920 06:02:14 -- common/autotest_common.sh@953 -- # echo 'Process with pid 105970 is not found' 00:09:43.920 06:02:14 -- event/cpu_locks.sh@18 -- # rm -f 00:09:43.920 00:09:43.920 real 0m59.009s 00:09:43.920 user 1m39.621s 00:09:43.920 sys 0m9.569s 00:09:43.921 06:02:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:43.921 ************************************ 00:09:43.921 END TEST cpu_locks 00:09:43.921 ************************************ 00:09:43.921 06:02:14 -- common/autotest_common.sh@10 -- # set +x 00:09:43.921 ************************************ 00:09:43.921 END TEST event 00:09:43.921 ************************************ 00:09:43.921 00:09:43.921 real 1m34.175s 00:09:43.921 user 2m47.340s 00:09:43.921 sys 0m14.913s 00:09:43.921 06:02:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:43.921 06:02:14 -- common/autotest_common.sh@10 -- # set +x 00:09:43.921 06:02:14 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:43.921 06:02:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:43.921 06:02:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:43.921 06:02:14 -- common/autotest_common.sh@10 -- # set +x 00:09:43.921 ************************************ 00:09:43.921 START TEST thread 00:09:43.921 ************************************ 00:09:43.921 06:02:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:44.179 * Looking for test storage... 00:09:44.179 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:44.179 06:02:14 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:44.179 06:02:14 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:09:44.179 06:02:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:44.179 06:02:14 -- common/autotest_common.sh@10 -- # set +x 00:09:44.179 ************************************ 00:09:44.179 START TEST thread_poller_perf 00:09:44.179 ************************************ 00:09:44.179 06:02:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:44.179 [2024-06-11 06:02:14.728121] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:09:44.179 [2024-06-11 06:02:14.728530] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106183 ] 00:09:44.438 [2024-06-11 06:02:14.917247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.696 [2024-06-11 06:02:15.216040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.696 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:46.071 ====================================== 00:09:46.071 busy:2112234580 (cyc) 00:09:46.071 total_run_count: 370000 00:09:46.071 tsc_hz: 2100000000 (cyc) 00:09:46.071 ====================================== 00:09:46.071 poller_cost: 5708 (cyc), 2718 (nsec) 00:09:46.071 00:09:46.071 real 0m2.026s 00:09:46.071 user 0m1.763s 00:09:46.071 sys 0m0.161s 00:09:46.071 06:02:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:46.071 ************************************ 00:09:46.071 END TEST thread_poller_perf 00:09:46.071 ************************************ 00:09:46.071 06:02:16 -- common/autotest_common.sh@10 -- # set +x 00:09:46.330 06:02:16 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:46.330 06:02:16 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:09:46.330 06:02:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:46.330 06:02:16 -- common/autotest_common.sh@10 -- # set +x 00:09:46.330 ************************************ 00:09:46.330 START TEST thread_poller_perf 00:09:46.330 ************************************ 00:09:46.330 06:02:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:46.330 [2024-06-11 06:02:16.812014] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:46.330 [2024-06-11 06:02:16.812244] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106233 ] 00:09:46.589 [2024-06-11 06:02:16.996757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.847 [2024-06-11 06:02:17.244187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.847 Running 1000 pollers for 1 seconds with 0 microseconds period. 
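poller_cost in the block above is, to within rounding, the busy cycle counter divided by the iteration count, converted to nanoseconds with the reported TSC frequency. Reproducing the first run's figures:

  busy=2112234580 runs=370000 tsc_hz=2100000000
  cyc=$(( busy / runs ))                  # 5708 cycles per poller invocation
  nsec=$(( cyc * 1000000000 / tsc_hz ))   # 2718 ns at the 2.1 GHz TSC
  echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"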
00:09:48.262 ====================================== 00:09:48.262 busy:2104906208 (cyc) 00:09:48.262 total_run_count: 4886000 00:09:48.262 tsc_hz: 2100000000 (cyc) 00:09:48.262 ====================================== 00:09:48.262 poller_cost: 430 (cyc), 204 (nsec) 00:09:48.262 00:09:48.262 real 0m1.972s 00:09:48.262 user 0m1.710s 00:09:48.262 sys 0m0.160s 00:09:48.262 06:02:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:48.262 06:02:18 -- common/autotest_common.sh@10 -- # set +x 00:09:48.262 ************************************ 00:09:48.262 END TEST thread_poller_perf 00:09:48.262 ************************************ 00:09:48.262 06:02:18 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:09:48.262 06:02:18 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:09:48.262 06:02:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:48.262 06:02:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:48.262 06:02:18 -- common/autotest_common.sh@10 -- # set +x 00:09:48.262 ************************************ 00:09:48.262 START TEST thread_spdk_lock 00:09:48.262 ************************************ 00:09:48.262 06:02:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:09:48.262 [2024-06-11 06:02:18.846742] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:48.262 [2024-06-11 06:02:18.846974] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106284 ] 00:09:48.521 [2024-06-11 06:02:19.033100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:48.779 [2024-06-11 06:02:19.292676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.779 [2024-06-11 06:02:19.292677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.346 [2024-06-11 06:02:19.812891] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:49.346 [2024-06-11 06:02:19.813227] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:09:49.346 [2024-06-11 06:02:19.813400] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x558ef5aaba00 00:09:49.346 [2024-06-11 06:02:19.824965] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:49.346 [2024-06-11 06:02:19.825153] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:49.346 [2024-06-11 06:02:19.825280] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:49.914 Starting test contend 00:09:49.914 Worker Delay Wait us Hold us Total us 00:09:49.914 0 3 134167 194276 328443 00:09:49.914 1 5 67661 292950 360612 00:09:49.914 PASS test contend 00:09:49.914 Starting test hold_by_poller 
00:09:49.914 PASS test hold_by_poller 00:09:49.914 Starting test hold_by_message 00:09:49.914 PASS test hold_by_message 00:09:49.914 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:09:49.914 100014 assertions passed 00:09:49.914 0 assertions failed 00:09:49.914 00:09:49.914 real 0m1.571s 00:09:49.914 user 0m1.828s 00:09:49.914 sys 0m0.176s 00:09:49.914 06:02:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.914 ************************************ 00:09:49.914 END TEST thread_spdk_lock 00:09:49.914 ************************************ 00:09:49.914 06:02:20 -- common/autotest_common.sh@10 -- # set +x 00:09:49.914 ************************************ 00:09:49.914 END TEST thread 00:09:49.914 ************************************ 00:09:49.914 00:09:49.914 real 0m5.850s 00:09:49.914 user 0m5.449s 00:09:49.914 sys 0m0.645s 00:09:49.914 06:02:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.914 06:02:20 -- common/autotest_common.sh@10 -- # set +x 00:09:49.914 06:02:20 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:09:49.914 06:02:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:49.914 06:02:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:49.914 06:02:20 -- common/autotest_common.sh@10 -- # set +x 00:09:49.914 ************************************ 00:09:49.914 START TEST accel 00:09:49.914 ************************************ 00:09:49.914 06:02:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:09:49.914 * Looking for test storage... 00:09:49.914 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:49.914 06:02:20 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:09:49.914 06:02:20 -- accel/accel.sh@74 -- # get_expected_opcs 00:09:49.914 06:02:20 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:49.914 06:02:20 -- accel/accel.sh@59 -- # spdk_tgt_pid=106369 00:09:49.914 06:02:20 -- accel/accel.sh@60 -- # waitforlisten 106369 00:09:49.914 06:02:20 -- accel/accel.sh@58 -- # build_accel_config 00:09:49.914 06:02:20 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:09:49.914 06:02:20 -- common/autotest_common.sh@819 -- # '[' -z 106369 ']' 00:09:49.914 06:02:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:49.914 06:02:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.914 06:02:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:49.914 06:02:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:49.914 06:02:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:49.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.914 06:02:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:49.914 06:02:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.914 06:02:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:49.914 06:02:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:49.914 06:02:20 -- accel/accel.sh@41 -- # local IFS=, 00:09:49.914 06:02:20 -- accel/accel.sh@42 -- # jq -r . 00:09:49.914 06:02:20 -- common/autotest_common.sh@10 -- # set +x 00:09:50.172 [2024-06-11 06:02:20.634153] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:09:50.172 [2024-06-11 06:02:20.634329] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106369 ] 00:09:50.172 [2024-06-11 06:02:20.800595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.431 [2024-06-11 06:02:21.034212] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:50.431 [2024-06-11 06:02:21.034771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.807 06:02:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:51.807 06:02:22 -- common/autotest_common.sh@852 -- # return 0 00:09:51.807 06:02:22 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:09:51.807 06:02:22 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:09:51.807 06:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:51.807 06:02:22 -- common/autotest_common.sh@10 -- # set +x 00:09:51.807 06:02:22 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:09:51.807 06:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:51.807 06:02:22 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:51.807 06:02:22 -- accel/accel.sh@64 -- # IFS== 00:09:51.807 06:02:22 -- accel/accel.sh@64 -- # read -r opc module 00:09:51.807 06:02:22 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:51.807 06:02:22 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:51.807 06:02:22 -- accel/accel.sh@64 -- # IFS== 00:09:51.807 06:02:22 -- accel/accel.sh@64 -- # read -r opc module 00:09:51.807 06:02:22 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:51.807 06:02:22 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:51.807 06:02:22 -- accel/accel.sh@64 -- # IFS== 00:09:51.807 06:02:22 -- accel/accel.sh@64 -- # read -r opc module 00:09:51.808 06:02:22 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:51.808 06:02:22 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:51.808 06:02:22 -- accel/accel.sh@64 -- # IFS== 00:09:51.808 06:02:22 -- accel/accel.sh@64 -- # read -r opc module 00:09:51.808 06:02:22 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:51.808 06:02:22 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:51.808 06:02:22 -- accel/accel.sh@64 -- # IFS== 00:09:51.808 06:02:22 -- accel/accel.sh@64 -- # read -r opc module 00:09:51.808 06:02:22 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:51.808 06:02:22 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:51.808 06:02:22 -- accel/accel.sh@64 -- # IFS== 00:09:51.808 06:02:22 -- accel/accel.sh@64 -- # read -r opc module 00:09:51.808 06:02:22 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:51.808 06:02:22 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:51.808 06:02:22 -- accel/accel.sh@64 -- # IFS== 00:09:51.808 06:02:22 -- accel/accel.sh@64 -- # read -r opc module 00:09:51.808 06:02:22 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:51.808 06:02:22 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:51.808 06:02:22 -- accel/accel.sh@64 -- # IFS== 00:09:51.808 06:02:22 -- accel/accel.sh@64 -- # read -r opc module 00:09:51.808 06:02:22 -- accel/accel.sh@65 -- # 
expected_opcs["$opc"]=software 00:09:51.808 06:02:22 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:51.808 06:02:22 -- accel/accel.sh@64 -- # IFS== 00:09:51.808 06:02:22 -- accel/accel.sh@64 -- # read -r opc module 00:09:51.808 06:02:22 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:51.808 06:02:22 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:51.808 06:02:22 -- accel/accel.sh@64 -- # IFS== 00:09:51.808 06:02:22 -- accel/accel.sh@64 -- # read -r opc module 00:09:51.808 06:02:22 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:51.808 06:02:22 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:51.808 06:02:22 -- accel/accel.sh@64 -- # IFS== 00:09:51.808 06:02:22 -- accel/accel.sh@64 -- # read -r opc module 00:09:51.808 06:02:22 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:51.808 06:02:22 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:51.808 06:02:22 -- accel/accel.sh@64 -- # IFS== 00:09:51.808 06:02:22 -- accel/accel.sh@64 -- # read -r opc module 00:09:51.808 06:02:22 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:51.808 06:02:22 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:51.808 06:02:22 -- accel/accel.sh@64 -- # IFS== 00:09:51.808 06:02:22 -- accel/accel.sh@64 -- # read -r opc module 00:09:51.808 06:02:22 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:51.808 06:02:22 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:51.808 06:02:22 -- accel/accel.sh@64 -- # IFS== 00:09:51.808 06:02:22 -- accel/accel.sh@64 -- # read -r opc module 00:09:51.808 06:02:22 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:51.808 06:02:22 -- accel/accel.sh@67 -- # killprocess 106369 00:09:51.808 06:02:22 -- common/autotest_common.sh@926 -- # '[' -z 106369 ']' 00:09:51.808 06:02:22 -- common/autotest_common.sh@930 -- # kill -0 106369 00:09:51.808 06:02:22 -- common/autotest_common.sh@931 -- # uname 00:09:51.808 06:02:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:51.808 06:02:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106369 00:09:51.808 06:02:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:51.808 06:02:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:51.808 killing process with pid 106369 00:09:51.808 06:02:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106369' 00:09:51.808 06:02:22 -- common/autotest_common.sh@945 -- # kill 106369 00:09:51.808 06:02:22 -- common/autotest_common.sh@950 -- # wait 106369 00:09:55.091 06:02:25 -- accel/accel.sh@68 -- # trap - ERR 00:09:55.091 06:02:25 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:09:55.091 06:02:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:55.091 06:02:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:55.091 06:02:25 -- common/autotest_common.sh@10 -- # set +x 00:09:55.091 06:02:25 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:09:55.091 06:02:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:09:55.091 06:02:25 -- accel/accel.sh@12 -- # build_accel_config 00:09:55.091 06:02:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:55.091 06:02:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:55.091 06:02:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:55.091 06:02:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:55.091 06:02:25 -- accel/accel.sh@37 -- # [[ -n 
'' ]] 00:09:55.091 06:02:25 -- accel/accel.sh@41 -- # local IFS=, 00:09:55.091 06:02:25 -- accel/accel.sh@42 -- # jq -r . 00:09:55.091 06:02:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:55.091 06:02:25 -- common/autotest_common.sh@10 -- # set +x 00:09:55.091 06:02:25 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:09:55.091 06:02:25 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:55.091 06:02:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:55.091 06:02:25 -- common/autotest_common.sh@10 -- # set +x 00:09:55.091 ************************************ 00:09:55.091 START TEST accel_missing_filename 00:09:55.091 ************************************ 00:09:55.091 06:02:25 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:09:55.091 06:02:25 -- common/autotest_common.sh@640 -- # local es=0 00:09:55.091 06:02:25 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:09:55.091 06:02:25 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:09:55.091 06:02:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:55.091 06:02:25 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:09:55.091 06:02:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:55.091 06:02:25 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:09:55.091 06:02:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:09:55.091 06:02:25 -- accel/accel.sh@12 -- # build_accel_config 00:09:55.091 06:02:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:55.091 06:02:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:55.091 06:02:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:55.091 06:02:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:55.091 06:02:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:55.091 06:02:25 -- accel/accel.sh@41 -- # local IFS=, 00:09:55.091 06:02:25 -- accel/accel.sh@42 -- # jq -r . 00:09:55.091 [2024-06-11 06:02:25.284442] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:55.091 [2024-06-11 06:02:25.284648] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106470 ] 00:09:55.091 [2024-06-11 06:02:25.468932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.091 [2024-06-11 06:02:25.716839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.350 [2024-06-11 06:02:25.963025] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:56.283 [2024-06-11 06:02:26.584973] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:09:56.542 A filename is required. 
00:09:56.542 06:02:27 -- common/autotest_common.sh@643 -- # es=234 00:09:56.542 06:02:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:56.542 06:02:27 -- common/autotest_common.sh@652 -- # es=106 00:09:56.542 06:02:27 -- common/autotest_common.sh@653 -- # case "$es" in 00:09:56.542 06:02:27 -- common/autotest_common.sh@660 -- # es=1 00:09:56.542 06:02:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:56.542 00:09:56.542 real 0m1.846s 00:09:56.542 user 0m1.553s 00:09:56.542 sys 0m0.236s 00:09:56.542 06:02:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.542 ************************************ 00:09:56.542 END TEST accel_missing_filename 00:09:56.542 ************************************ 00:09:56.542 06:02:27 -- common/autotest_common.sh@10 -- # set +x 00:09:56.542 06:02:27 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:56.542 06:02:27 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:09:56.542 06:02:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:56.542 06:02:27 -- common/autotest_common.sh@10 -- # set +x 00:09:56.542 ************************************ 00:09:56.542 START TEST accel_compress_verify 00:09:56.542 ************************************ 00:09:56.542 06:02:27 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:56.542 06:02:27 -- common/autotest_common.sh@640 -- # local es=0 00:09:56.542 06:02:27 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:56.542 06:02:27 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:09:56.542 06:02:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:56.542 06:02:27 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:09:56.542 06:02:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:56.542 06:02:27 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:56.542 06:02:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:56.543 06:02:27 -- accel/accel.sh@12 -- # build_accel_config 00:09:56.543 06:02:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:56.543 06:02:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:56.543 06:02:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:56.543 06:02:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:56.543 06:02:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:56.543 06:02:27 -- accel/accel.sh@41 -- # local IFS=, 00:09:56.543 06:02:27 -- accel/accel.sh@42 -- # jq -r . 00:09:56.543 [2024-06-11 06:02:27.184052] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:09:56.543 [2024-06-11 06:02:27.184253] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106519 ] 00:09:56.801 [2024-06-11 06:02:27.367500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.059 [2024-06-11 06:02:27.605660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.318 [2024-06-11 06:02:27.840257] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:57.885 [2024-06-11 06:02:28.459659] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:09:58.453 00:09:58.453 Compression does not support the verify option, aborting. 00:09:58.453 06:02:28 -- common/autotest_common.sh@643 -- # es=161 00:09:58.453 06:02:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:58.453 06:02:28 -- common/autotest_common.sh@652 -- # es=33 00:09:58.453 06:02:28 -- common/autotest_common.sh@653 -- # case "$es" in 00:09:58.453 06:02:28 -- common/autotest_common.sh@660 -- # es=1 00:09:58.453 06:02:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:58.453 00:09:58.453 real 0m1.818s 00:09:58.453 user 0m1.486s 00:09:58.453 sys 0m0.275s 00:09:58.453 06:02:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:58.453 ************************************ 00:09:58.453 END TEST accel_compress_verify 00:09:58.453 06:02:28 -- common/autotest_common.sh@10 -- # set +x 00:09:58.453 ************************************ 00:09:58.453 06:02:28 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:09:58.453 06:02:28 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:58.453 06:02:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:58.453 06:02:28 -- common/autotest_common.sh@10 -- # set +x 00:09:58.453 ************************************ 00:09:58.453 START TEST accel_wrong_workload 00:09:58.453 ************************************ 00:09:58.453 06:02:29 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:09:58.453 06:02:29 -- common/autotest_common.sh@640 -- # local es=0 00:09:58.453 06:02:29 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:09:58.453 06:02:29 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:09:58.453 06:02:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:58.453 06:02:29 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:09:58.453 06:02:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:58.453 06:02:29 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:09:58.453 06:02:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:09:58.453 06:02:29 -- accel/accel.sh@12 -- # build_accel_config 00:09:58.453 06:02:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:58.453 06:02:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:58.453 06:02:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:58.453 06:02:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:58.453 06:02:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:58.453 06:02:29 -- accel/accel.sh@41 -- # local IFS=, 00:09:58.453 06:02:29 -- accel/accel.sh@42 -- # jq -r . 
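Every launch above is preceded by the same build_accel_config trace (accel.sh lines 32-42): an empty accel_json_cfg array, three module guards that all expand to [[ 0 -gt 0 ]] in this run, an empty extra-config check, and a comma-joined jq pass. Roughly, the helper has this shape — the guard names and JSON fragments below are placeholders, since the trace only shows their expanded values:

  # Reconstruction from the xtrace; guard names and JSON fragments are hypothetical.
  build_accel_config() {
      accel_json_cfg=()
      [[ ${SPDK_TEST_MODULE_A:-0} -gt 0 ]] && accel_json_cfg+=('{"module_a": true}')
      [[ ${SPDK_TEST_MODULE_B:-0} -gt 0 ]] && accel_json_cfg+=('{"module_b": true}')
      [[ ${SPDK_TEST_MODULE_C:-0} -gt 0 ]] && accel_json_cfg+=('{"module_c": true}')
      [[ -n ${EXTRA_ACCEL_CFG:-} ]] && accel_json_cfg+=("$EXTRA_ACCEL_CFG")
      local IFS=,
      jq -r . <<< "[${accel_json_cfg[*]}]"   # IFS=, joins entries into a JSON array
  }

Whatever this helper prints is presumably what accel_perf receives on -c /dev/fd/62 in the traced command lines, i.e. the config is fed through process substitution.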
00:09:58.453 Unsupported workload type: foobar 00:09:58.453 [2024-06-11 06:02:29.062609] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:09:58.453 accel_perf options: 00:09:58.453 [-h help message] 00:09:58.453 [-q queue depth per core] 00:09:58.453 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:58.453 [-T number of threads per core 00:09:58.453 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:58.453 [-t time in seconds] 00:09:58.453 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:58.453 [ dif_verify, , dif_generate, dif_generate_copy 00:09:58.453 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:58.453 [-l for compress/decompress workloads, name of uncompressed input file 00:09:58.453 [-S for crc32c workload, use this seed value (default 0) 00:09:58.453 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:58.453 [-f for fill workload, use this BYTE value (default 255) 00:09:58.453 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:58.453 [-y verify result if this switch is on] 00:09:58.453 [-a tasks to allocate per core (default: same value as -q)] 00:09:58.453 Can be used to spread operations across a wider range of memory. 00:09:58.453 06:02:29 -- common/autotest_common.sh@643 -- # es=1 00:09:58.453 06:02:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:58.453 06:02:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:58.453 06:02:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:58.453 00:09:58.453 real 0m0.083s 00:09:58.453 user 0m0.070s 00:09:58.453 sys 0m0.061s 00:09:58.453 06:02:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:58.453 06:02:29 -- common/autotest_common.sh@10 -- # set +x 00:09:58.453 ************************************ 00:09:58.453 END TEST accel_wrong_workload 00:09:58.453 ************************************ 00:09:58.712 06:02:29 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:09:58.712 06:02:29 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:09:58.712 06:02:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:58.712 06:02:29 -- common/autotest_common.sh@10 -- # set +x 00:09:58.712 ************************************ 00:09:58.712 START TEST accel_negative_buffers 00:09:58.712 ************************************ 00:09:58.712 06:02:29 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:09:58.712 06:02:29 -- common/autotest_common.sh@640 -- # local es=0 00:09:58.712 06:02:29 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:09:58.712 06:02:29 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:09:58.712 06:02:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:58.712 06:02:29 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:09:58.712 06:02:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:58.712 06:02:29 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:09:58.712 06:02:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:09:58.712 06:02:29 -- accel/accel.sh@12 -- # 
build_accel_config 00:09:58.712 06:02:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:58.712 06:02:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:58.712 06:02:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:58.712 06:02:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:58.712 06:02:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:58.712 06:02:29 -- accel/accel.sh@41 -- # local IFS=, 00:09:58.712 06:02:29 -- accel/accel.sh@42 -- # jq -r . 00:09:58.712 -x option must be non-negative. 00:09:58.712 [2024-06-11 06:02:29.203306] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:09:58.712 accel_perf options: 00:09:58.712 [-h help message] 00:09:58.712 [-q queue depth per core] 00:09:58.712 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:58.712 [-T number of threads per core 00:09:58.712 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:58.712 [-t time in seconds] 00:09:58.712 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:58.712 [ dif_verify, , dif_generate, dif_generate_copy 00:09:58.712 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:58.712 [-l for compress/decompress workloads, name of uncompressed input file 00:09:58.712 [-S for crc32c workload, use this seed value (default 0) 00:09:58.712 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:58.712 [-f for fill workload, use this BYTE value (default 255) 00:09:58.712 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:58.712 [-y verify result if this switch is on] 00:09:58.712 [-a tasks to allocate per core (default: same value as -q)] 00:09:58.712 Can be used to spread operations across a wider range of memory. 
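The usage text above doubles as the reference for how these negative tests are built: each one feeds accel_perf a flag combination the parser must reject (-w foobar, -x -1). For contrast, a valid invocation assembled purely from the documented flags and the binary path in the trace would be:

  # Illustrative only: -t run time, -w workload, -S crc32c seed, -y verify, -q queue depth.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y -q 32

which, minus the -q override, is exactly the command line the accel_crc32c test that follows issues.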
00:09:58.712 06:02:29 -- common/autotest_common.sh@643 -- # es=1 00:09:58.712 06:02:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:58.712 06:02:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:58.712 06:02:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:58.712 00:09:58.712 real 0m0.087s 00:09:58.712 user 0m0.085s 00:09:58.712 sys 0m0.048s 00:09:58.712 06:02:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:58.712 06:02:29 -- common/autotest_common.sh@10 -- # set +x 00:09:58.712 ************************************ 00:09:58.712 END TEST accel_negative_buffers 00:09:58.712 ************************************ 00:09:58.712 06:02:29 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:09:58.712 06:02:29 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:09:58.712 06:02:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:58.713 06:02:29 -- common/autotest_common.sh@10 -- # set +x 00:09:58.713 ************************************ 00:09:58.713 START TEST accel_crc32c 00:09:58.713 ************************************ 00:09:58.713 06:02:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:09:58.713 06:02:29 -- accel/accel.sh@16 -- # local accel_opc 00:09:58.713 06:02:29 -- accel/accel.sh@17 -- # local accel_module 00:09:58.713 06:02:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:09:58.713 06:02:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:09:58.713 06:02:29 -- accel/accel.sh@12 -- # build_accel_config 00:09:58.713 06:02:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:58.713 06:02:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:58.713 06:02:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:58.713 06:02:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:58.713 06:02:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:58.713 06:02:29 -- accel/accel.sh@41 -- # local IFS=, 00:09:58.713 06:02:29 -- accel/accel.sh@42 -- # jq -r . 00:09:58.971 [2024-06-11 06:02:29.363065] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:09:58.971 [2024-06-11 06:02:29.363464] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106612 ] 00:09:58.971 [2024-06-11 06:02:29.550490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.229 [2024-06-11 06:02:29.862579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.753 06:02:32 -- accel/accel.sh@18 -- # out=' 00:10:01.753 SPDK Configuration: 00:10:01.753 Core mask: 0x1 00:10:01.753 00:10:01.753 Accel Perf Configuration: 00:10:01.753 Workload Type: crc32c 00:10:01.753 CRC-32C seed: 32 00:10:01.753 Transfer size: 4096 bytes 00:10:01.753 Vector count 1 00:10:01.753 Module: software 00:10:01.753 Queue depth: 32 00:10:01.753 Allocate depth: 32 00:10:01.753 # threads/core: 1 00:10:01.753 Run time: 1 seconds 00:10:01.753 Verify: Yes 00:10:01.753 00:10:01.753 Running for 1 seconds... 
00:10:01.753 00:10:01.753 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:01.753 ------------------------------------------------------------------------------------ 00:10:01.753 0,0 444384/s 1735 MiB/s 0 0 00:10:01.753 ==================================================================================== 00:10:01.753 Total 444384/s 1735 MiB/s 0 0' 00:10:01.753 06:02:32 -- accel/accel.sh@20 -- # IFS=: 00:10:01.753 06:02:32 -- accel/accel.sh@20 -- # read -r var val 00:10:01.753 06:02:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:01.753 06:02:32 -- accel/accel.sh@12 -- # build_accel_config 00:10:01.753 06:02:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:01.753 06:02:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:01.753 06:02:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:01.753 06:02:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:01.753 06:02:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:01.753 06:02:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:01.753 06:02:32 -- accel/accel.sh@41 -- # local IFS=, 00:10:01.753 06:02:32 -- accel/accel.sh@42 -- # jq -r . 00:10:01.753 [2024-06-11 06:02:32.346021] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:01.753 [2024-06-11 06:02:32.346256] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106659 ] 00:10:02.010 [2024-06-11 06:02:32.525240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.267 [2024-06-11 06:02:32.840955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.524 06:02:33 -- accel/accel.sh@21 -- # val= 00:10:02.524 06:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # IFS=: 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # read -r var val 00:10:02.524 06:02:33 -- accel/accel.sh@21 -- # val= 00:10:02.524 06:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # IFS=: 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # read -r var val 00:10:02.524 06:02:33 -- accel/accel.sh@21 -- # val=0x1 00:10:02.524 06:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # IFS=: 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # read -r var val 00:10:02.524 06:02:33 -- accel/accel.sh@21 -- # val= 00:10:02.524 06:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # IFS=: 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # read -r var val 00:10:02.524 06:02:33 -- accel/accel.sh@21 -- # val= 00:10:02.524 06:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # IFS=: 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # read -r var val 00:10:02.524 06:02:33 -- accel/accel.sh@21 -- # val=crc32c 00:10:02.524 06:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.524 06:02:33 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # IFS=: 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # read -r var val 00:10:02.524 06:02:33 -- accel/accel.sh@21 -- # val=32 00:10:02.524 06:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # IFS=: 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # read -r var val 00:10:02.524 06:02:33 
-- accel/accel.sh@21 -- # val='4096 bytes' 00:10:02.524 06:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # IFS=: 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # read -r var val 00:10:02.524 06:02:33 -- accel/accel.sh@21 -- # val= 00:10:02.524 06:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # IFS=: 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # read -r var val 00:10:02.524 06:02:33 -- accel/accel.sh@21 -- # val=software 00:10:02.524 06:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.524 06:02:33 -- accel/accel.sh@23 -- # accel_module=software 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # IFS=: 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # read -r var val 00:10:02.524 06:02:33 -- accel/accel.sh@21 -- # val=32 00:10:02.524 06:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # IFS=: 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # read -r var val 00:10:02.524 06:02:33 -- accel/accel.sh@21 -- # val=32 00:10:02.524 06:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # IFS=: 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # read -r var val 00:10:02.524 06:02:33 -- accel/accel.sh@21 -- # val=1 00:10:02.524 06:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # IFS=: 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # read -r var val 00:10:02.524 06:02:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:02.524 06:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # IFS=: 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # read -r var val 00:10:02.524 06:02:33 -- accel/accel.sh@21 -- # val=Yes 00:10:02.524 06:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # IFS=: 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # read -r var val 00:10:02.524 06:02:33 -- accel/accel.sh@21 -- # val= 00:10:02.524 06:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # IFS=: 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # read -r var val 00:10:02.524 06:02:33 -- accel/accel.sh@21 -- # val= 00:10:02.524 06:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # IFS=: 00:10:02.524 06:02:33 -- accel/accel.sh@20 -- # read -r var val 00:10:05.048 06:02:35 -- accel/accel.sh@21 -- # val= 00:10:05.048 06:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.048 06:02:35 -- accel/accel.sh@20 -- # IFS=: 00:10:05.048 06:02:35 -- accel/accel.sh@20 -- # read -r var val 00:10:05.048 06:02:35 -- accel/accel.sh@21 -- # val= 00:10:05.048 06:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.048 06:02:35 -- accel/accel.sh@20 -- # IFS=: 00:10:05.048 06:02:35 -- accel/accel.sh@20 -- # read -r var val 00:10:05.048 06:02:35 -- accel/accel.sh@21 -- # val= 00:10:05.048 06:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.048 06:02:35 -- accel/accel.sh@20 -- # IFS=: 00:10:05.048 06:02:35 -- accel/accel.sh@20 -- # read -r var val 00:10:05.048 06:02:35 -- accel/accel.sh@21 -- # val= 00:10:05.048 06:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.048 06:02:35 -- accel/accel.sh@20 -- # IFS=: 00:10:05.049 06:02:35 -- accel/accel.sh@20 -- # read -r var val 00:10:05.049 06:02:35 -- accel/accel.sh@21 -- # val= 00:10:05.049 06:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.049 06:02:35 -- accel/accel.sh@20 -- # IFS=: 00:10:05.049 06:02:35 
-- accel/accel.sh@20 -- # read -r var val 00:10:05.049 06:02:35 -- accel/accel.sh@21 -- # val= 00:10:05.049 06:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:05.049 06:02:35 -- accel/accel.sh@20 -- # IFS=: 00:10:05.049 06:02:35 -- accel/accel.sh@20 -- # read -r var val 00:10:05.049 06:02:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:05.049 06:02:35 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:05.049 06:02:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:05.049 00:10:05.049 real 0m5.991s 00:10:05.049 user 0m5.237s 00:10:05.049 sys 0m0.563s 00:10:05.049 06:02:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:05.049 06:02:35 -- common/autotest_common.sh@10 -- # set +x 00:10:05.049 ************************************ 00:10:05.049 END TEST accel_crc32c 00:10:05.049 ************************************ 00:10:05.049 06:02:35 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:10:05.049 06:02:35 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:05.049 06:02:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:05.049 06:02:35 -- common/autotest_common.sh@10 -- # set +x 00:10:05.049 ************************************ 00:10:05.049 START TEST accel_crc32c_C2 00:10:05.049 ************************************ 00:10:05.049 06:02:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:10:05.049 06:02:35 -- accel/accel.sh@16 -- # local accel_opc 00:10:05.049 06:02:35 -- accel/accel.sh@17 -- # local accel_module 00:10:05.049 06:02:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:05.049 06:02:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:05.049 06:02:35 -- accel/accel.sh@12 -- # build_accel_config 00:10:05.049 06:02:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:05.049 06:02:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:05.049 06:02:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:05.049 06:02:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:05.049 06:02:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:05.049 06:02:35 -- accel/accel.sh@41 -- # local IFS=, 00:10:05.049 06:02:35 -- accel/accel.sh@42 -- # jq -r . 00:10:05.049 [2024-06-11 06:02:35.388817] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:05.049 [2024-06-11 06:02:35.389006] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106716 ] 00:10:05.049 [2024-06-11 06:02:35.555105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.306 [2024-06-11 06:02:35.840349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.829 06:02:38 -- accel/accel.sh@18 -- # out=' 00:10:07.829 SPDK Configuration: 00:10:07.829 Core mask: 0x1 00:10:07.829 00:10:07.829 Accel Perf Configuration: 00:10:07.829 Workload Type: crc32c 00:10:07.829 CRC-32C seed: 0 00:10:07.829 Transfer size: 4096 bytes 00:10:07.829 Vector count 2 00:10:07.829 Module: software 00:10:07.829 Queue depth: 32 00:10:07.829 Allocate depth: 32 00:10:07.829 # threads/core: 1 00:10:07.829 Run time: 1 seconds 00:10:07.829 Verify: Yes 00:10:07.829 00:10:07.829 Running for 1 seconds... 
00:10:07.829 00:10:07.829 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:07.830 ------------------------------------------------------------------------------------ 00:10:07.830 0,0 375328/s 2932 MiB/s 0 0 00:10:07.830 ==================================================================================== 00:10:07.830 Total 375328/s 1466 MiB/s 0 0' 00:10:07.830 06:02:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:07.830 06:02:38 -- accel/accel.sh@20 -- # IFS=: 00:10:07.830 06:02:38 -- accel/accel.sh@20 -- # read -r var val 00:10:07.830 06:02:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:07.830 06:02:38 -- accel/accel.sh@12 -- # build_accel_config 00:10:07.830 06:02:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:07.830 06:02:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:07.830 06:02:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:07.830 06:02:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:07.830 06:02:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:07.830 06:02:38 -- accel/accel.sh@41 -- # local IFS=, 00:10:07.830 06:02:38 -- accel/accel.sh@42 -- # jq -r . 00:10:07.830 [2024-06-11 06:02:38.338319] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:07.830 [2024-06-11 06:02:38.338625] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106765 ] 00:10:08.087 [2024-06-11 06:02:38.525938] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.345 [2024-06-11 06:02:38.827652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.602 06:02:39 -- accel/accel.sh@21 -- # val= 00:10:08.602 06:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.602 06:02:39 -- accel/accel.sh@20 -- # IFS=: 00:10:08.602 06:02:39 -- accel/accel.sh@20 -- # read -r var val 00:10:08.602 06:02:39 -- accel/accel.sh@21 -- # val= 00:10:08.602 06:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.602 06:02:39 -- accel/accel.sh@20 -- # IFS=: 00:10:08.602 06:02:39 -- accel/accel.sh@20 -- # read -r var val 00:10:08.602 06:02:39 -- accel/accel.sh@21 -- # val=0x1 00:10:08.602 06:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.602 06:02:39 -- accel/accel.sh@20 -- # IFS=: 00:10:08.602 06:02:39 -- accel/accel.sh@20 -- # read -r var val 00:10:08.602 06:02:39 -- accel/accel.sh@21 -- # val= 00:10:08.602 06:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.602 06:02:39 -- accel/accel.sh@20 -- # IFS=: 00:10:08.602 06:02:39 -- accel/accel.sh@20 -- # read -r var val 00:10:08.602 06:02:39 -- accel/accel.sh@21 -- # val= 00:10:08.602 06:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.602 06:02:39 -- accel/accel.sh@20 -- # IFS=: 00:10:08.602 06:02:39 -- accel/accel.sh@20 -- # read -r var val 00:10:08.602 06:02:39 -- accel/accel.sh@21 -- # val=crc32c 00:10:08.602 06:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.602 06:02:39 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:08.602 06:02:39 -- accel/accel.sh@20 -- # IFS=: 00:10:08.602 06:02:39 -- accel/accel.sh@20 -- # read -r var val 00:10:08.602 06:02:39 -- accel/accel.sh@21 -- # val=0 00:10:08.602 06:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.602 06:02:39 -- accel/accel.sh@20 -- # IFS=: 00:10:08.602 06:02:39 -- accel/accel.sh@20 -- # read -r var val 00:10:08.602 06:02:39 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:10:08.602 06:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.602 06:02:39 -- accel/accel.sh@20 -- # IFS=: 00:10:08.602 06:02:39 -- accel/accel.sh@20 -- # read -r var val 00:10:08.602 06:02:39 -- accel/accel.sh@21 -- # val= 00:10:08.602 06:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.602 06:02:39 -- accel/accel.sh@20 -- # IFS=: 00:10:08.603 06:02:39 -- accel/accel.sh@20 -- # read -r var val 00:10:08.603 06:02:39 -- accel/accel.sh@21 -- # val=software 00:10:08.603 06:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.603 06:02:39 -- accel/accel.sh@23 -- # accel_module=software 00:10:08.603 06:02:39 -- accel/accel.sh@20 -- # IFS=: 00:10:08.603 06:02:39 -- accel/accel.sh@20 -- # read -r var val 00:10:08.603 06:02:39 -- accel/accel.sh@21 -- # val=32 00:10:08.603 06:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.603 06:02:39 -- accel/accel.sh@20 -- # IFS=: 00:10:08.603 06:02:39 -- accel/accel.sh@20 -- # read -r var val 00:10:08.603 06:02:39 -- accel/accel.sh@21 -- # val=32 00:10:08.603 06:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.603 06:02:39 -- accel/accel.sh@20 -- # IFS=: 00:10:08.603 06:02:39 -- accel/accel.sh@20 -- # read -r var val 00:10:08.603 06:02:39 -- accel/accel.sh@21 -- # val=1 00:10:08.603 06:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.603 06:02:39 -- accel/accel.sh@20 -- # IFS=: 00:10:08.603 06:02:39 -- accel/accel.sh@20 -- # read -r var val 00:10:08.603 06:02:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:08.603 06:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.603 06:02:39 -- accel/accel.sh@20 -- # IFS=: 00:10:08.603 06:02:39 -- accel/accel.sh@20 -- # read -r var val 00:10:08.603 06:02:39 -- accel/accel.sh@21 -- # val=Yes 00:10:08.603 06:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.603 06:02:39 -- accel/accel.sh@20 -- # IFS=: 00:10:08.603 06:02:39 -- accel/accel.sh@20 -- # read -r var val 00:10:08.603 06:02:39 -- accel/accel.sh@21 -- # val= 00:10:08.603 06:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.603 06:02:39 -- accel/accel.sh@20 -- # IFS=: 00:10:08.603 06:02:39 -- accel/accel.sh@20 -- # read -r var val 00:10:08.603 06:02:39 -- accel/accel.sh@21 -- # val= 00:10:08.603 06:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.603 06:02:39 -- accel/accel.sh@20 -- # IFS=: 00:10:08.603 06:02:39 -- accel/accel.sh@20 -- # read -r var val 00:10:11.148 06:02:41 -- accel/accel.sh@21 -- # val= 00:10:11.149 06:02:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.149 06:02:41 -- accel/accel.sh@20 -- # IFS=: 00:10:11.149 06:02:41 -- accel/accel.sh@20 -- # read -r var val 00:10:11.149 06:02:41 -- accel/accel.sh@21 -- # val= 00:10:11.149 06:02:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.149 06:02:41 -- accel/accel.sh@20 -- # IFS=: 00:10:11.149 06:02:41 -- accel/accel.sh@20 -- # read -r var val 00:10:11.149 06:02:41 -- accel/accel.sh@21 -- # val= 00:10:11.149 06:02:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.149 06:02:41 -- accel/accel.sh@20 -- # IFS=: 00:10:11.149 06:02:41 -- accel/accel.sh@20 -- # read -r var val 00:10:11.149 06:02:41 -- accel/accel.sh@21 -- # val= 00:10:11.149 06:02:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.149 06:02:41 -- accel/accel.sh@20 -- # IFS=: 00:10:11.149 06:02:41 -- accel/accel.sh@20 -- # read -r var val 00:10:11.149 06:02:41 -- accel/accel.sh@21 -- # val= 00:10:11.149 06:02:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.149 06:02:41 -- accel/accel.sh@20 -- # IFS=: 00:10:11.149 06:02:41 -- 
accel/accel.sh@20 -- # read -r var val 00:10:11.149 06:02:41 -- accel/accel.sh@21 -- # val= 00:10:11.149 06:02:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:11.149 06:02:41 -- accel/accel.sh@20 -- # IFS=: 00:10:11.149 06:02:41 -- accel/accel.sh@20 -- # read -r var val 00:10:11.149 06:02:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:11.149 06:02:41 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:11.149 06:02:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:11.149 00:10:11.149 real 0m5.920s 00:10:11.149 user 0m5.255s 00:10:11.149 sys 0m0.499s 00:10:11.149 06:02:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.149 ************************************ 00:10:11.149 END TEST accel_crc32c_C2 00:10:11.149 06:02:41 -- common/autotest_common.sh@10 -- # set +x 00:10:11.149 ************************************ 00:10:11.149 06:02:41 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:10:11.149 06:02:41 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:11.149 06:02:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:11.149 06:02:41 -- common/autotest_common.sh@10 -- # set +x 00:10:11.149 ************************************ 00:10:11.149 START TEST accel_copy 00:10:11.149 ************************************ 00:10:11.149 06:02:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:10:11.149 06:02:41 -- accel/accel.sh@16 -- # local accel_opc 00:10:11.149 06:02:41 -- accel/accel.sh@17 -- # local accel_module 00:10:11.149 06:02:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:10:11.149 06:02:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:11.149 06:02:41 -- accel/accel.sh@12 -- # build_accel_config 00:10:11.149 06:02:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:11.149 06:02:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:11.149 06:02:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:11.149 06:02:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:11.149 06:02:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:11.149 06:02:41 -- accel/accel.sh@41 -- # local IFS=, 00:10:11.149 06:02:41 -- accel/accel.sh@42 -- # jq -r . 00:10:11.149 [2024-06-11 06:02:41.368908] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:11.149 [2024-06-11 06:02:41.369076] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106818 ] 00:10:11.149 [2024-06-11 06:02:41.538479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.406 [2024-06-11 06:02:41.794870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.931 06:02:44 -- accel/accel.sh@18 -- # out=' 00:10:13.931 SPDK Configuration: 00:10:13.931 Core mask: 0x1 00:10:13.931 00:10:13.931 Accel Perf Configuration: 00:10:13.931 Workload Type: copy 00:10:13.931 Transfer size: 4096 bytes 00:10:13.931 Vector count 1 00:10:13.931 Module: software 00:10:13.931 Queue depth: 32 00:10:13.931 Allocate depth: 32 00:10:13.931 # threads/core: 1 00:10:13.931 Run time: 1 seconds 00:10:13.931 Verify: Yes 00:10:13.931 00:10:13.931 Running for 1 seconds... 
00:10:13.931 00:10:13.931 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:13.931 ------------------------------------------------------------------------------------ 00:10:13.931 0,0 305792/s 1194 MiB/s 0 0 00:10:13.931 ==================================================================================== 00:10:13.931 Total 305792/s 1194 MiB/s 0 0' 00:10:13.931 06:02:44 -- accel/accel.sh@20 -- # IFS=: 00:10:13.931 06:02:44 -- accel/accel.sh@20 -- # read -r var val 00:10:13.931 06:02:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:10:13.931 06:02:44 -- accel/accel.sh@12 -- # build_accel_config 00:10:13.931 06:02:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:13.931 06:02:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:13.931 06:02:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:13.931 06:02:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:13.931 06:02:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:13.931 06:02:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:13.931 06:02:44 -- accel/accel.sh@41 -- # local IFS=, 00:10:13.931 06:02:44 -- accel/accel.sh@42 -- # jq -r . 00:10:13.931 [2024-06-11 06:02:44.294742] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:13.931 [2024-06-11 06:02:44.295012] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106865 ] 00:10:13.931 [2024-06-11 06:02:44.484859] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.188 [2024-06-11 06:02:44.748657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.446 06:02:45 -- accel/accel.sh@21 -- # val= 00:10:14.446 06:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.446 06:02:45 -- accel/accel.sh@20 -- # IFS=: 00:10:14.446 06:02:45 -- accel/accel.sh@20 -- # read -r var val 00:10:14.446 06:02:45 -- accel/accel.sh@21 -- # val= 00:10:14.446 06:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.446 06:02:45 -- accel/accel.sh@20 -- # IFS=: 00:10:14.446 06:02:45 -- accel/accel.sh@20 -- # read -r var val 00:10:14.446 06:02:45 -- accel/accel.sh@21 -- # val=0x1 00:10:14.446 06:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.446 06:02:45 -- accel/accel.sh@20 -- # IFS=: 00:10:14.446 06:02:45 -- accel/accel.sh@20 -- # read -r var val 00:10:14.446 06:02:45 -- accel/accel.sh@21 -- # val= 00:10:14.446 06:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.446 06:02:45 -- accel/accel.sh@20 -- # IFS=: 00:10:14.446 06:02:45 -- accel/accel.sh@20 -- # read -r var val 00:10:14.446 06:02:45 -- accel/accel.sh@21 -- # val= 00:10:14.447 06:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.447 06:02:45 -- accel/accel.sh@20 -- # IFS=: 00:10:14.447 06:02:45 -- accel/accel.sh@20 -- # read -r var val 00:10:14.447 06:02:45 -- accel/accel.sh@21 -- # val=copy 00:10:14.447 06:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.447 06:02:45 -- accel/accel.sh@24 -- # accel_opc=copy 00:10:14.447 06:02:45 -- accel/accel.sh@20 -- # IFS=: 00:10:14.447 06:02:45 -- accel/accel.sh@20 -- # read -r var val 00:10:14.447 06:02:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:14.447 06:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.447 06:02:45 -- accel/accel.sh@20 -- # IFS=: 00:10:14.447 06:02:45 -- accel/accel.sh@20 -- # read -r var val 00:10:14.447 06:02:45 -- 
accel/accel.sh@21 -- # val= 00:10:14.447 06:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.447 06:02:45 -- accel/accel.sh@20 -- # IFS=: 00:10:14.447 06:02:45 -- accel/accel.sh@20 -- # read -r var val 00:10:14.447 06:02:45 -- accel/accel.sh@21 -- # val=software 00:10:14.447 06:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.447 06:02:45 -- accel/accel.sh@23 -- # accel_module=software 00:10:14.447 06:02:45 -- accel/accel.sh@20 -- # IFS=: 00:10:14.447 06:02:45 -- accel/accel.sh@20 -- # read -r var val 00:10:14.447 06:02:45 -- accel/accel.sh@21 -- # val=32 00:10:14.447 06:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.447 06:02:45 -- accel/accel.sh@20 -- # IFS=: 00:10:14.447 06:02:45 -- accel/accel.sh@20 -- # read -r var val 00:10:14.447 06:02:45 -- accel/accel.sh@21 -- # val=32 00:10:14.447 06:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.447 06:02:45 -- accel/accel.sh@20 -- # IFS=: 00:10:14.447 06:02:45 -- accel/accel.sh@20 -- # read -r var val 00:10:14.447 06:02:45 -- accel/accel.sh@21 -- # val=1 00:10:14.447 06:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.447 06:02:45 -- accel/accel.sh@20 -- # IFS=: 00:10:14.447 06:02:45 -- accel/accel.sh@20 -- # read -r var val 00:10:14.447 06:02:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:14.447 06:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.447 06:02:45 -- accel/accel.sh@20 -- # IFS=: 00:10:14.447 06:02:45 -- accel/accel.sh@20 -- # read -r var val 00:10:14.447 06:02:45 -- accel/accel.sh@21 -- # val=Yes 00:10:14.447 06:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.447 06:02:45 -- accel/accel.sh@20 -- # IFS=: 00:10:14.447 06:02:45 -- accel/accel.sh@20 -- # read -r var val 00:10:14.447 06:02:45 -- accel/accel.sh@21 -- # val= 00:10:14.447 06:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.447 06:02:45 -- accel/accel.sh@20 -- # IFS=: 00:10:14.447 06:02:45 -- accel/accel.sh@20 -- # read -r var val 00:10:14.447 06:02:45 -- accel/accel.sh@21 -- # val= 00:10:14.447 06:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.447 06:02:45 -- accel/accel.sh@20 -- # IFS=: 00:10:14.447 06:02:45 -- accel/accel.sh@20 -- # read -r var val 00:10:16.969 06:02:47 -- accel/accel.sh@21 -- # val= 00:10:16.969 06:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.969 06:02:47 -- accel/accel.sh@20 -- # IFS=: 00:10:16.969 06:02:47 -- accel/accel.sh@20 -- # read -r var val 00:10:16.969 06:02:47 -- accel/accel.sh@21 -- # val= 00:10:16.969 06:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.969 06:02:47 -- accel/accel.sh@20 -- # IFS=: 00:10:16.969 06:02:47 -- accel/accel.sh@20 -- # read -r var val 00:10:16.969 06:02:47 -- accel/accel.sh@21 -- # val= 00:10:16.969 06:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.969 06:02:47 -- accel/accel.sh@20 -- # IFS=: 00:10:16.969 06:02:47 -- accel/accel.sh@20 -- # read -r var val 00:10:16.969 06:02:47 -- accel/accel.sh@21 -- # val= 00:10:16.969 06:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.969 06:02:47 -- accel/accel.sh@20 -- # IFS=: 00:10:16.969 06:02:47 -- accel/accel.sh@20 -- # read -r var val 00:10:16.969 06:02:47 -- accel/accel.sh@21 -- # val= 00:10:16.969 06:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.969 06:02:47 -- accel/accel.sh@20 -- # IFS=: 00:10:16.969 06:02:47 -- accel/accel.sh@20 -- # read -r var val 00:10:16.969 06:02:47 -- accel/accel.sh@21 -- # val= 00:10:16.969 06:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.969 06:02:47 -- accel/accel.sh@20 -- # IFS=: 00:10:16.969 06:02:47 -- 
accel/accel.sh@20 -- # read -r var val 00:10:16.969 06:02:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:16.969 06:02:47 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:10:16.969 06:02:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:16.969 00:10:16.969 real 0m5.894s 00:10:16.969 user 0m5.153s 00:10:16.969 sys 0m0.566s 00:10:16.969 06:02:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.969 ************************************ 00:10:16.969 END TEST accel_copy 00:10:16.969 ************************************ 00:10:16.969 06:02:47 -- common/autotest_common.sh@10 -- # set +x 00:10:16.969 06:02:47 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:16.969 06:02:47 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:16.969 06:02:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:16.969 06:02:47 -- common/autotest_common.sh@10 -- # set +x 00:10:16.969 ************************************ 00:10:16.969 START TEST accel_fill 00:10:16.969 ************************************ 00:10:16.969 06:02:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:16.969 06:02:47 -- accel/accel.sh@16 -- # local accel_opc 00:10:16.969 06:02:47 -- accel/accel.sh@17 -- # local accel_module 00:10:16.969 06:02:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:16.969 06:02:47 -- accel/accel.sh@12 -- # build_accel_config 00:10:16.969 06:02:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:16.969 06:02:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:16.969 06:02:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:16.969 06:02:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:16.969 06:02:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:16.969 06:02:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:16.969 06:02:47 -- accel/accel.sh@41 -- # local IFS=, 00:10:16.969 06:02:47 -- accel/accel.sh@42 -- # jq -r . 00:10:16.969 [2024-06-11 06:02:47.332277] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:16.969 [2024-06-11 06:02:47.332498] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106922 ] 00:10:16.969 [2024-06-11 06:02:47.517663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.226 [2024-06-11 06:02:47.769243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.753 06:02:50 -- accel/accel.sh@18 -- # out=' 00:10:19.753 SPDK Configuration: 00:10:19.753 Core mask: 0x1 00:10:19.753 00:10:19.753 Accel Perf Configuration: 00:10:19.753 Workload Type: fill 00:10:19.753 Fill pattern: 0x80 00:10:19.753 Transfer size: 4096 bytes 00:10:19.753 Vector count 1 00:10:19.753 Module: software 00:10:19.753 Queue depth: 64 00:10:19.753 Allocate depth: 64 00:10:19.753 # threads/core: 1 00:10:19.753 Run time: 1 seconds 00:10:19.753 Verify: Yes 00:10:19.753 00:10:19.753 Running for 1 seconds... 
00:10:19.753 00:10:19.753 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:19.753 ------------------------------------------------------------------------------------ 00:10:19.754 0,0 493056/s 1926 MiB/s 0 0 00:10:19.754 ==================================================================================== 00:10:19.754 Total 493056/s 1926 MiB/s 0 0' 00:10:19.754 06:02:50 -- accel/accel.sh@20 -- # IFS=: 00:10:19.754 06:02:50 -- accel/accel.sh@20 -- # read -r var val 00:10:19.754 06:02:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:19.754 06:02:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:19.754 06:02:50 -- accel/accel.sh@12 -- # build_accel_config 00:10:19.754 06:02:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:19.754 06:02:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:19.754 06:02:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:19.754 06:02:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:19.754 06:02:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:19.754 06:02:50 -- accel/accel.sh@41 -- # local IFS=, 00:10:19.754 06:02:50 -- accel/accel.sh@42 -- # jq -r . 00:10:19.754 [2024-06-11 06:02:50.259390] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:19.754 [2024-06-11 06:02:50.259546] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106966 ] 00:10:20.011 [2024-06-11 06:02:50.430778] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.269 [2024-06-11 06:02:50.705499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.527 06:02:50 -- accel/accel.sh@21 -- # val= 00:10:20.527 06:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.527 06:02:50 -- accel/accel.sh@20 -- # IFS=: 00:10:20.527 06:02:50 -- accel/accel.sh@20 -- # read -r var val 00:10:20.527 06:02:50 -- accel/accel.sh@21 -- # val= 00:10:20.527 06:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.527 06:02:50 -- accel/accel.sh@20 -- # IFS=: 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # read -r var val 00:10:20.528 06:02:50 -- accel/accel.sh@21 -- # val=0x1 00:10:20.528 06:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # IFS=: 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # read -r var val 00:10:20.528 06:02:50 -- accel/accel.sh@21 -- # val= 00:10:20.528 06:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # IFS=: 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # read -r var val 00:10:20.528 06:02:50 -- accel/accel.sh@21 -- # val= 00:10:20.528 06:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # IFS=: 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # read -r var val 00:10:20.528 06:02:50 -- accel/accel.sh@21 -- # val=fill 00:10:20.528 06:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.528 06:02:50 -- accel/accel.sh@24 -- # accel_opc=fill 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # IFS=: 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # read -r var val 00:10:20.528 06:02:50 -- accel/accel.sh@21 -- # val=0x80 00:10:20.528 06:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # IFS=: 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # read -r var val 
00:10:20.528 06:02:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:20.528 06:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # IFS=: 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # read -r var val 00:10:20.528 06:02:50 -- accel/accel.sh@21 -- # val= 00:10:20.528 06:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # IFS=: 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # read -r var val 00:10:20.528 06:02:50 -- accel/accel.sh@21 -- # val=software 00:10:20.528 06:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.528 06:02:50 -- accel/accel.sh@23 -- # accel_module=software 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # IFS=: 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # read -r var val 00:10:20.528 06:02:50 -- accel/accel.sh@21 -- # val=64 00:10:20.528 06:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # IFS=: 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # read -r var val 00:10:20.528 06:02:50 -- accel/accel.sh@21 -- # val=64 00:10:20.528 06:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # IFS=: 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # read -r var val 00:10:20.528 06:02:50 -- accel/accel.sh@21 -- # val=1 00:10:20.528 06:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # IFS=: 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # read -r var val 00:10:20.528 06:02:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:20.528 06:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # IFS=: 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # read -r var val 00:10:20.528 06:02:50 -- accel/accel.sh@21 -- # val=Yes 00:10:20.528 06:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # IFS=: 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # read -r var val 00:10:20.528 06:02:50 -- accel/accel.sh@21 -- # val= 00:10:20.528 06:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # IFS=: 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # read -r var val 00:10:20.528 06:02:50 -- accel/accel.sh@21 -- # val= 00:10:20.528 06:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # IFS=: 00:10:20.528 06:02:50 -- accel/accel.sh@20 -- # read -r var val 00:10:23.056 06:02:53 -- accel/accel.sh@21 -- # val= 00:10:23.056 06:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.056 06:02:53 -- accel/accel.sh@20 -- # IFS=: 00:10:23.056 06:02:53 -- accel/accel.sh@20 -- # read -r var val 00:10:23.056 06:02:53 -- accel/accel.sh@21 -- # val= 00:10:23.056 06:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.056 06:02:53 -- accel/accel.sh@20 -- # IFS=: 00:10:23.056 06:02:53 -- accel/accel.sh@20 -- # read -r var val 00:10:23.056 06:02:53 -- accel/accel.sh@21 -- # val= 00:10:23.056 06:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.056 06:02:53 -- accel/accel.sh@20 -- # IFS=: 00:10:23.056 06:02:53 -- accel/accel.sh@20 -- # read -r var val 00:10:23.056 06:02:53 -- accel/accel.sh@21 -- # val= 00:10:23.056 06:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.056 06:02:53 -- accel/accel.sh@20 -- # IFS=: 00:10:23.056 06:02:53 -- accel/accel.sh@20 -- # read -r var val 00:10:23.056 06:02:53 -- accel/accel.sh@21 -- # val= 00:10:23.056 06:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.056 06:02:53 -- accel/accel.sh@20 -- # IFS=: 
00:10:23.056 06:02:53 -- accel/accel.sh@20 -- # read -r var val 00:10:23.056 06:02:53 -- accel/accel.sh@21 -- # val= 00:10:23.056 06:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.056 06:02:53 -- accel/accel.sh@20 -- # IFS=: 00:10:23.056 06:02:53 -- accel/accel.sh@20 -- # read -r var val 00:10:23.056 06:02:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:23.056 06:02:53 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:10:23.056 06:02:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:23.056 00:10:23.056 real 0m5.847s 00:10:23.056 user 0m5.140s 00:10:23.056 sys 0m0.521s 00:10:23.056 06:02:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:23.056 ************************************ 00:10:23.056 06:02:53 -- common/autotest_common.sh@10 -- # set +x 00:10:23.056 END TEST accel_fill 00:10:23.056 ************************************ 00:10:23.056 06:02:53 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:10:23.056 06:02:53 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:23.056 06:02:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:23.056 06:02:53 -- common/autotest_common.sh@10 -- # set +x 00:10:23.056 ************************************ 00:10:23.056 START TEST accel_copy_crc32c 00:10:23.056 ************************************ 00:10:23.056 06:02:53 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:10:23.056 06:02:53 -- accel/accel.sh@16 -- # local accel_opc 00:10:23.056 06:02:53 -- accel/accel.sh@17 -- # local accel_module 00:10:23.056 06:02:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:23.056 06:02:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:23.056 06:02:53 -- accel/accel.sh@12 -- # build_accel_config 00:10:23.056 06:02:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:23.056 06:02:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:23.056 06:02:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:23.056 06:02:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:23.056 06:02:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:23.056 06:02:53 -- accel/accel.sh@41 -- # local IFS=, 00:10:23.056 06:02:53 -- accel/accel.sh@42 -- # jq -r . 00:10:23.056 [2024-06-11 06:02:53.235598] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:23.056 [2024-06-11 06:02:53.235812] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107023 ] 00:10:23.056 [2024-06-11 06:02:53.418828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.056 [2024-06-11 06:02:53.677216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.584 06:02:56 -- accel/accel.sh@18 -- # out=' 00:10:25.584 SPDK Configuration: 00:10:25.584 Core mask: 0x1 00:10:25.584 00:10:25.584 Accel Perf Configuration: 00:10:25.584 Workload Type: copy_crc32c 00:10:25.584 CRC-32C seed: 0 00:10:25.584 Vector size: 4096 bytes 00:10:25.584 Transfer size: 4096 bytes 00:10:25.584 Vector count 1 00:10:25.584 Module: software 00:10:25.584 Queue depth: 32 00:10:25.584 Allocate depth: 32 00:10:25.584 # threads/core: 1 00:10:25.584 Run time: 1 seconds 00:10:25.584 Verify: Yes 00:10:25.584 00:10:25.584 Running for 1 seconds... 
00:10:25.584 00:10:25.584 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:25.584 ------------------------------------------------------------------------------------ 00:10:25.584 0,0 244032/s 953 MiB/s 0 0 00:10:25.584 ==================================================================================== 00:10:25.584 Total 244032/s 953 MiB/s 0 0' 00:10:25.584 06:02:56 -- accel/accel.sh@20 -- # IFS=: 00:10:25.584 06:02:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:25.584 06:02:56 -- accel/accel.sh@20 -- # read -r var val 00:10:25.584 06:02:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:25.584 06:02:56 -- accel/accel.sh@12 -- # build_accel_config 00:10:25.584 06:02:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:25.584 06:02:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:25.584 06:02:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:25.584 06:02:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:25.584 06:02:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:25.584 06:02:56 -- accel/accel.sh@41 -- # local IFS=, 00:10:25.584 06:02:56 -- accel/accel.sh@42 -- # jq -r . 00:10:25.584 [2024-06-11 06:02:56.175675] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:25.584 [2024-06-11 06:02:56.175875] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107063 ] 00:10:25.841 [2024-06-11 06:02:56.359163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.098 [2024-06-11 06:02:56.632934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.355 06:02:56 -- accel/accel.sh@21 -- # val= 00:10:26.355 06:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # IFS=: 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # read -r var val 00:10:26.355 06:02:56 -- accel/accel.sh@21 -- # val= 00:10:26.355 06:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # IFS=: 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # read -r var val 00:10:26.355 06:02:56 -- accel/accel.sh@21 -- # val=0x1 00:10:26.355 06:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # IFS=: 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # read -r var val 00:10:26.355 06:02:56 -- accel/accel.sh@21 -- # val= 00:10:26.355 06:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # IFS=: 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # read -r var val 00:10:26.355 06:02:56 -- accel/accel.sh@21 -- # val= 00:10:26.355 06:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # IFS=: 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # read -r var val 00:10:26.355 06:02:56 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:26.355 06:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.355 06:02:56 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # IFS=: 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # read -r var val 00:10:26.355 06:02:56 -- accel/accel.sh@21 -- # val=0 00:10:26.355 06:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # IFS=: 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # read -r var val 00:10:26.355 
06:02:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:26.355 06:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # IFS=: 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # read -r var val 00:10:26.355 06:02:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:26.355 06:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # IFS=: 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # read -r var val 00:10:26.355 06:02:56 -- accel/accel.sh@21 -- # val= 00:10:26.355 06:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # IFS=: 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # read -r var val 00:10:26.355 06:02:56 -- accel/accel.sh@21 -- # val=software 00:10:26.355 06:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.355 06:02:56 -- accel/accel.sh@23 -- # accel_module=software 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # IFS=: 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # read -r var val 00:10:26.355 06:02:56 -- accel/accel.sh@21 -- # val=32 00:10:26.355 06:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # IFS=: 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # read -r var val 00:10:26.355 06:02:56 -- accel/accel.sh@21 -- # val=32 00:10:26.355 06:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # IFS=: 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # read -r var val 00:10:26.355 06:02:56 -- accel/accel.sh@21 -- # val=1 00:10:26.355 06:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # IFS=: 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # read -r var val 00:10:26.355 06:02:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:26.355 06:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # IFS=: 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # read -r var val 00:10:26.355 06:02:56 -- accel/accel.sh@21 -- # val=Yes 00:10:26.355 06:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # IFS=: 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # read -r var val 00:10:26.355 06:02:56 -- accel/accel.sh@21 -- # val= 00:10:26.355 06:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # IFS=: 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # read -r var val 00:10:26.355 06:02:56 -- accel/accel.sh@21 -- # val= 00:10:26.355 06:02:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # IFS=: 00:10:26.355 06:02:56 -- accel/accel.sh@20 -- # read -r var val 00:10:28.882 06:02:59 -- accel/accel.sh@21 -- # val= 00:10:28.882 06:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.882 06:02:59 -- accel/accel.sh@20 -- # IFS=: 00:10:28.882 06:02:59 -- accel/accel.sh@20 -- # read -r var val 00:10:28.882 06:02:59 -- accel/accel.sh@21 -- # val= 00:10:28.882 06:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.882 06:02:59 -- accel/accel.sh@20 -- # IFS=: 00:10:28.882 06:02:59 -- accel/accel.sh@20 -- # read -r var val 00:10:28.882 06:02:59 -- accel/accel.sh@21 -- # val= 00:10:28.882 06:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.882 06:02:59 -- accel/accel.sh@20 -- # IFS=: 00:10:28.882 06:02:59 -- accel/accel.sh@20 -- # read -r var val 00:10:28.882 06:02:59 -- accel/accel.sh@21 -- # val= 00:10:28.882 06:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.882 06:02:59 -- accel/accel.sh@20 -- # IFS=: 
00:10:28.882 06:02:59 -- accel/accel.sh@20 -- # read -r var val 00:10:28.882 06:02:59 -- accel/accel.sh@21 -- # val= 00:10:28.882 06:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.882 06:02:59 -- accel/accel.sh@20 -- # IFS=: 00:10:28.882 06:02:59 -- accel/accel.sh@20 -- # read -r var val 00:10:28.882 06:02:59 -- accel/accel.sh@21 -- # val= 00:10:28.882 06:02:59 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.882 06:02:59 -- accel/accel.sh@20 -- # IFS=: 00:10:28.882 06:02:59 -- accel/accel.sh@20 -- # read -r var val 00:10:28.882 06:02:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:28.882 06:02:59 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:28.882 06:02:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:28.882 00:10:28.882 real 0m5.904s 00:10:28.882 user 0m5.182s 00:10:28.882 sys 0m0.551s 00:10:28.882 06:02:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:28.882 06:02:59 -- common/autotest_common.sh@10 -- # set +x 00:10:28.882 ************************************ 00:10:28.882 END TEST accel_copy_crc32c 00:10:28.882 ************************************ 00:10:28.882 06:02:59 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:10:28.882 06:02:59 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:28.882 06:02:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:28.882 06:02:59 -- common/autotest_common.sh@10 -- # set +x 00:10:28.882 ************************************ 00:10:28.882 START TEST accel_copy_crc32c_C2 00:10:28.882 ************************************ 00:10:28.882 06:02:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:10:28.882 06:02:59 -- accel/accel.sh@16 -- # local accel_opc 00:10:28.882 06:02:59 -- accel/accel.sh@17 -- # local accel_module 00:10:28.882 06:02:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:28.882 06:02:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:28.882 06:02:59 -- accel/accel.sh@12 -- # build_accel_config 00:10:28.882 06:02:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:28.882 06:02:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:28.882 06:02:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:28.882 06:02:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:28.882 06:02:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:28.882 06:02:59 -- accel/accel.sh@41 -- # local IFS=, 00:10:28.882 06:02:59 -- accel/accel.sh@42 -- # jq -r . 00:10:28.882 [2024-06-11 06:02:59.193035] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:10:28.882 [2024-06-11 06:02:59.193255] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107123 ] 00:10:28.882 [2024-06-11 06:02:59.378382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.140 [2024-06-11 06:02:59.635520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.682 06:03:01 -- accel/accel.sh@18 -- # out=' 00:10:31.682 SPDK Configuration: 00:10:31.682 Core mask: 0x1 00:10:31.682 00:10:31.682 Accel Perf Configuration: 00:10:31.682 Workload Type: copy_crc32c 00:10:31.682 CRC-32C seed: 0 00:10:31.682 Vector size: 4096 bytes 00:10:31.682 Transfer size: 8192 bytes 00:10:31.682 Vector count 2 00:10:31.682 Module: software 00:10:31.682 Queue depth: 32 00:10:31.682 Allocate depth: 32 00:10:31.682 # threads/core: 1 00:10:31.682 Run time: 1 seconds 00:10:31.682 Verify: Yes 00:10:31.682 00:10:31.682 Running for 1 seconds... 00:10:31.682 00:10:31.682 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:31.682 ------------------------------------------------------------------------------------ 00:10:31.682 0,0 174272/s 1361 MiB/s 0 0 00:10:31.682 ==================================================================================== 00:10:31.682 Total 174272/s 680 MiB/s 0 0' 00:10:31.682 06:03:01 -- accel/accel.sh@20 -- # IFS=: 00:10:31.682 06:03:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:31.682 06:03:01 -- accel/accel.sh@20 -- # read -r var val 00:10:31.682 06:03:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:31.682 06:03:01 -- accel/accel.sh@12 -- # build_accel_config 00:10:31.682 06:03:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:31.682 06:03:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:31.682 06:03:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:31.682 06:03:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:31.682 06:03:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:31.682 06:03:01 -- accel/accel.sh@41 -- # local IFS=, 00:10:31.682 06:03:01 -- accel/accel.sh@42 -- # jq -r . 00:10:31.682 [2024-06-11 06:03:02.035669] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
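
The -C 2 run is the same operation over two 4096-byte source vectors per transfer ("Vector count 2", "Transfer size: 8192 bytes"), with the CRC chained across vectors. Note the two bandwidth figures in the table are computed over different byte counts: 174272/s x 8192 B ~= 1361 MiB/s on the per-core row versus 174272/s x 4096 B ~= 680 MiB/s on the Total row, which appears to count a single vector rather than the whole transfer — the transfer rate itself is identical. A sketch of the vectored variant, reusing the illustrative crc32c() above:

    def copy_crc32c_vectored(vectors: list[bytes], seed: int = 0) -> tuple[bytes, int]:
        # -C N: copy every source vector and chain one CRC-32C across all of them,
        # so the result matches a single CRC over the concatenated payload.
        crc, out = seed, bytearray()
        for v in vectors:
            out += v
            crc = crc32c(v, crc)
        return bytes(out), crc
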
00:10:31.682 [2024-06-11 06:03:02.035805] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107164 ] 00:10:31.682 [2024-06-11 06:03:02.201425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.940 [2024-06-11 06:03:02.477906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.199 06:03:02 -- accel/accel.sh@21 -- # val= 00:10:32.199 06:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # IFS=: 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # read -r var val 00:10:32.199 06:03:02 -- accel/accel.sh@21 -- # val= 00:10:32.199 06:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # IFS=: 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # read -r var val 00:10:32.199 06:03:02 -- accel/accel.sh@21 -- # val=0x1 00:10:32.199 06:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # IFS=: 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # read -r var val 00:10:32.199 06:03:02 -- accel/accel.sh@21 -- # val= 00:10:32.199 06:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # IFS=: 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # read -r var val 00:10:32.199 06:03:02 -- accel/accel.sh@21 -- # val= 00:10:32.199 06:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # IFS=: 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # read -r var val 00:10:32.199 06:03:02 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:32.199 06:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.199 06:03:02 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # IFS=: 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # read -r var val 00:10:32.199 06:03:02 -- accel/accel.sh@21 -- # val=0 00:10:32.199 06:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # IFS=: 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # read -r var val 00:10:32.199 06:03:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:32.199 06:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # IFS=: 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # read -r var val 00:10:32.199 06:03:02 -- accel/accel.sh@21 -- # val='8192 bytes' 00:10:32.199 06:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # IFS=: 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # read -r var val 00:10:32.199 06:03:02 -- accel/accel.sh@21 -- # val= 00:10:32.199 06:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # IFS=: 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # read -r var val 00:10:32.199 06:03:02 -- accel/accel.sh@21 -- # val=software 00:10:32.199 06:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.199 06:03:02 -- accel/accel.sh@23 -- # accel_module=software 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # IFS=: 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # read -r var val 00:10:32.199 06:03:02 -- accel/accel.sh@21 -- # val=32 00:10:32.199 06:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # IFS=: 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # read -r var val 00:10:32.199 06:03:02 -- accel/accel.sh@21 -- # val=32 
00:10:32.199 06:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # IFS=: 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # read -r var val 00:10:32.199 06:03:02 -- accel/accel.sh@21 -- # val=1 00:10:32.199 06:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # IFS=: 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # read -r var val 00:10:32.199 06:03:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:32.199 06:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # IFS=: 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # read -r var val 00:10:32.199 06:03:02 -- accel/accel.sh@21 -- # val=Yes 00:10:32.199 06:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # IFS=: 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # read -r var val 00:10:32.199 06:03:02 -- accel/accel.sh@21 -- # val= 00:10:32.199 06:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # IFS=: 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # read -r var val 00:10:32.199 06:03:02 -- accel/accel.sh@21 -- # val= 00:10:32.199 06:03:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # IFS=: 00:10:32.199 06:03:02 -- accel/accel.sh@20 -- # read -r var val 00:10:34.732 06:03:04 -- accel/accel.sh@21 -- # val= 00:10:34.732 06:03:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.732 06:03:04 -- accel/accel.sh@20 -- # IFS=: 00:10:34.732 06:03:04 -- accel/accel.sh@20 -- # read -r var val 00:10:34.732 06:03:04 -- accel/accel.sh@21 -- # val= 00:10:34.732 06:03:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.732 06:03:04 -- accel/accel.sh@20 -- # IFS=: 00:10:34.732 06:03:04 -- accel/accel.sh@20 -- # read -r var val 00:10:34.732 06:03:04 -- accel/accel.sh@21 -- # val= 00:10:34.732 06:03:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.732 06:03:04 -- accel/accel.sh@20 -- # IFS=: 00:10:34.732 06:03:04 -- accel/accel.sh@20 -- # read -r var val 00:10:34.732 06:03:04 -- accel/accel.sh@21 -- # val= 00:10:34.732 06:03:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.732 06:03:04 -- accel/accel.sh@20 -- # IFS=: 00:10:34.732 06:03:04 -- accel/accel.sh@20 -- # read -r var val 00:10:34.732 06:03:04 -- accel/accel.sh@21 -- # val= 00:10:34.732 06:03:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.732 06:03:04 -- accel/accel.sh@20 -- # IFS=: 00:10:34.732 06:03:04 -- accel/accel.sh@20 -- # read -r var val 00:10:34.732 06:03:04 -- accel/accel.sh@21 -- # val= 00:10:34.732 06:03:04 -- accel/accel.sh@22 -- # case "$var" in 00:10:34.732 06:03:04 -- accel/accel.sh@20 -- # IFS=: 00:10:34.732 06:03:04 -- accel/accel.sh@20 -- # read -r var val 00:10:34.732 06:03:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:34.732 06:03:04 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:34.732 06:03:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:34.732 00:10:34.732 real 0m5.689s 00:10:34.732 user 0m4.946s 00:10:34.732 sys 0m0.572s 00:10:34.732 06:03:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:34.732 06:03:04 -- common/autotest_common.sh@10 -- # set +x 00:10:34.732 ************************************ 00:10:34.732 END TEST accel_copy_crc32c_C2 00:10:34.732 ************************************ 00:10:34.732 06:03:04 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:10:34.732 06:03:04 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:10:34.732 06:03:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:34.732 06:03:04 -- common/autotest_common.sh@10 -- # set +x 00:10:34.732 ************************************ 00:10:34.732 START TEST accel_dualcast 00:10:34.732 ************************************ 00:10:34.732 06:03:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:10:34.732 06:03:04 -- accel/accel.sh@16 -- # local accel_opc 00:10:34.732 06:03:04 -- accel/accel.sh@17 -- # local accel_module 00:10:34.732 06:03:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:10:34.732 06:03:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:34.732 06:03:04 -- accel/accel.sh@12 -- # build_accel_config 00:10:34.732 06:03:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:34.732 06:03:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:34.732 06:03:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:34.732 06:03:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:34.732 06:03:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:34.732 06:03:04 -- accel/accel.sh@41 -- # local IFS=, 00:10:34.733 06:03:04 -- accel/accel.sh@42 -- # jq -r . 00:10:34.733 [2024-06-11 06:03:04.937572] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:34.733 [2024-06-11 06:03:04.938328] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107221 ] 00:10:34.733 [2024-06-11 06:03:05.121215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.733 [2024-06-11 06:03:05.367044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.264 06:03:07 -- accel/accel.sh@18 -- # out=' 00:10:37.264 SPDK Configuration: 00:10:37.264 Core mask: 0x1 00:10:37.264 00:10:37.264 Accel Perf Configuration: 00:10:37.264 Workload Type: dualcast 00:10:37.264 Transfer size: 4096 bytes 00:10:37.264 Vector count 1 00:10:37.264 Module: software 00:10:37.264 Queue depth: 32 00:10:37.264 Allocate depth: 32 00:10:37.264 # threads/core: 1 00:10:37.264 Run time: 1 seconds 00:10:37.264 Verify: Yes 00:10:37.264 00:10:37.264 Running for 1 seconds... 00:10:37.264 00:10:37.264 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:37.264 ------------------------------------------------------------------------------------ 00:10:37.264 0,0 378880/s 1480 MiB/s 0 0 00:10:37.264 ==================================================================================== 00:10:37.264 Total 378880/s 1480 MiB/s 0 0' 00:10:37.264 06:03:07 -- accel/accel.sh@20 -- # IFS=: 00:10:37.264 06:03:07 -- accel/accel.sh@20 -- # read -r var val 00:10:37.264 06:03:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:10:37.264 06:03:07 -- accel/accel.sh@12 -- # build_accel_config 00:10:37.264 06:03:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:37.264 06:03:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:37.264 06:03:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:37.264 06:03:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:37.264 06:03:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:37.264 06:03:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:37.264 06:03:07 -- accel/accel.sh@41 -- # local IFS=, 00:10:37.264 06:03:07 -- accel/accel.sh@42 -- # jq -r . 
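
dualcast, configured above, writes one 4096-byte source into two destination buffers in a single op; the table counts each transfer once, so 378880/s x 4096 B ~= 1480 MiB/s. A minimal sketch — the helper below is hypothetical and stands in for whatever the software module does with real DMA-able buffers:

    def dualcast(src: bytes) -> tuple[bytes, bytes]:
        # One op, two identical destinations — e.g. mirroring a block to two targets.
        return bytes(src), bytes(src)

    src = b"\xab" * 4096
    dst1, dst2 = dualcast(src)
    assert dst1 == src and dst2 == src  # analogous to the harness's verify pass
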
00:10:37.264 [2024-06-11 06:03:07.765423] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:37.264 [2024-06-11 06:03:07.765906] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107263 ] 00:10:37.522 [2024-06-11 06:03:07.949783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.780 [2024-06-11 06:03:08.223866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.038 06:03:08 -- accel/accel.sh@21 -- # val= 00:10:38.038 06:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.038 06:03:08 -- accel/accel.sh@20 -- # IFS=: 00:10:38.038 06:03:08 -- accel/accel.sh@20 -- # read -r var val 00:10:38.038 06:03:08 -- accel/accel.sh@21 -- # val= 00:10:38.038 06:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.038 06:03:08 -- accel/accel.sh@20 -- # IFS=: 00:10:38.038 06:03:08 -- accel/accel.sh@20 -- # read -r var val 00:10:38.038 06:03:08 -- accel/accel.sh@21 -- # val=0x1 00:10:38.038 06:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.038 06:03:08 -- accel/accel.sh@20 -- # IFS=: 00:10:38.038 06:03:08 -- accel/accel.sh@20 -- # read -r var val 00:10:38.038 06:03:08 -- accel/accel.sh@21 -- # val= 00:10:38.038 06:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.038 06:03:08 -- accel/accel.sh@20 -- # IFS=: 00:10:38.038 06:03:08 -- accel/accel.sh@20 -- # read -r var val 00:10:38.038 06:03:08 -- accel/accel.sh@21 -- # val= 00:10:38.038 06:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.038 06:03:08 -- accel/accel.sh@20 -- # IFS=: 00:10:38.038 06:03:08 -- accel/accel.sh@20 -- # read -r var val 00:10:38.038 06:03:08 -- accel/accel.sh@21 -- # val=dualcast 00:10:38.038 06:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.038 06:03:08 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:10:38.038 06:03:08 -- accel/accel.sh@20 -- # IFS=: 00:10:38.038 06:03:08 -- accel/accel.sh@20 -- # read -r var val 00:10:38.038 06:03:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:38.038 06:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.038 06:03:08 -- accel/accel.sh@20 -- # IFS=: 00:10:38.038 06:03:08 -- accel/accel.sh@20 -- # read -r var val 00:10:38.038 06:03:08 -- accel/accel.sh@21 -- # val= 00:10:38.038 06:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.038 06:03:08 -- accel/accel.sh@20 -- # IFS=: 00:10:38.038 06:03:08 -- accel/accel.sh@20 -- # read -r var val 00:10:38.038 06:03:08 -- accel/accel.sh@21 -- # val=software 00:10:38.038 06:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.038 06:03:08 -- accel/accel.sh@23 -- # accel_module=software 00:10:38.038 06:03:08 -- accel/accel.sh@20 -- # IFS=: 00:10:38.038 06:03:08 -- accel/accel.sh@20 -- # read -r var val 00:10:38.038 06:03:08 -- accel/accel.sh@21 -- # val=32 00:10:38.038 06:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.038 06:03:08 -- accel/accel.sh@20 -- # IFS=: 00:10:38.038 06:03:08 -- accel/accel.sh@20 -- # read -r var val 00:10:38.038 06:03:08 -- accel/accel.sh@21 -- # val=32 00:10:38.038 06:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.038 06:03:08 -- accel/accel.sh@20 -- # IFS=: 00:10:38.038 06:03:08 -- accel/accel.sh@20 -- # read -r var val 00:10:38.038 06:03:08 -- accel/accel.sh@21 -- # val=1 00:10:38.038 06:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.038 06:03:08 -- accel/accel.sh@20 -- # IFS=: 00:10:38.038 
06:03:08 -- accel/accel.sh@20 -- # read -r var val 00:10:38.038 06:03:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:38.038 06:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.038 06:03:08 -- accel/accel.sh@20 -- # IFS=: 00:10:38.038 06:03:08 -- accel/accel.sh@20 -- # read -r var val 00:10:38.038 06:03:08 -- accel/accel.sh@21 -- # val=Yes 00:10:38.039 06:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.039 06:03:08 -- accel/accel.sh@20 -- # IFS=: 00:10:38.039 06:03:08 -- accel/accel.sh@20 -- # read -r var val 00:10:38.039 06:03:08 -- accel/accel.sh@21 -- # val= 00:10:38.039 06:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.039 06:03:08 -- accel/accel.sh@20 -- # IFS=: 00:10:38.039 06:03:08 -- accel/accel.sh@20 -- # read -r var val 00:10:38.039 06:03:08 -- accel/accel.sh@21 -- # val= 00:10:38.039 06:03:08 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.039 06:03:08 -- accel/accel.sh@20 -- # IFS=: 00:10:38.039 06:03:08 -- accel/accel.sh@20 -- # read -r var val 00:10:39.941 06:03:10 -- accel/accel.sh@21 -- # val= 00:10:39.941 06:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.941 06:03:10 -- accel/accel.sh@20 -- # IFS=: 00:10:39.941 06:03:10 -- accel/accel.sh@20 -- # read -r var val 00:10:39.941 06:03:10 -- accel/accel.sh@21 -- # val= 00:10:39.941 06:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.941 06:03:10 -- accel/accel.sh@20 -- # IFS=: 00:10:39.941 06:03:10 -- accel/accel.sh@20 -- # read -r var val 00:10:39.941 06:03:10 -- accel/accel.sh@21 -- # val= 00:10:39.941 06:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.941 06:03:10 -- accel/accel.sh@20 -- # IFS=: 00:10:39.941 06:03:10 -- accel/accel.sh@20 -- # read -r var val 00:10:39.941 06:03:10 -- accel/accel.sh@21 -- # val= 00:10:39.941 06:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.941 06:03:10 -- accel/accel.sh@20 -- # IFS=: 00:10:39.941 06:03:10 -- accel/accel.sh@20 -- # read -r var val 00:10:39.941 06:03:10 -- accel/accel.sh@21 -- # val= 00:10:39.941 06:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.941 06:03:10 -- accel/accel.sh@20 -- # IFS=: 00:10:39.941 06:03:10 -- accel/accel.sh@20 -- # read -r var val 00:10:39.941 06:03:10 -- accel/accel.sh@21 -- # val= 00:10:39.941 06:03:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:39.941 06:03:10 -- accel/accel.sh@20 -- # IFS=: 00:10:39.941 06:03:10 -- accel/accel.sh@20 -- # read -r var val 00:10:40.200 06:03:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:40.200 06:03:10 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:10:40.200 06:03:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:40.200 00:10:40.200 real 0m5.698s 00:10:40.200 user 0m4.981s 00:10:40.200 sys 0m0.528s 00:10:40.200 06:03:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:40.200 ************************************ 00:10:40.200 END TEST accel_dualcast 00:10:40.200 06:03:10 -- common/autotest_common.sh@10 -- # set +x 00:10:40.200 ************************************ 00:10:40.200 06:03:10 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:10:40.200 06:03:10 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:40.200 06:03:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:40.200 06:03:10 -- common/autotest_common.sh@10 -- # set +x 00:10:40.200 ************************************ 00:10:40.200 START TEST accel_compare 00:10:40.200 ************************************ 00:10:40.200 06:03:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:10:40.200 
06:03:10 -- accel/accel.sh@16 -- # local accel_opc 00:10:40.200 06:03:10 -- accel/accel.sh@17 -- # local accel_module 00:10:40.200 06:03:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:10:40.200 06:03:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:40.200 06:03:10 -- accel/accel.sh@12 -- # build_accel_config 00:10:40.200 06:03:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:40.200 06:03:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:40.200 06:03:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:40.200 06:03:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:40.200 06:03:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:40.200 06:03:10 -- accel/accel.sh@41 -- # local IFS=, 00:10:40.200 06:03:10 -- accel/accel.sh@42 -- # jq -r . 00:10:40.200 [2024-06-11 06:03:10.707439] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:40.200 [2024-06-11 06:03:10.707646] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107322 ] 00:10:40.459 [2024-06-11 06:03:10.889790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.718 [2024-06-11 06:03:11.133998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.275 06:03:13 -- accel/accel.sh@18 -- # out=' 00:10:43.275 SPDK Configuration: 00:10:43.275 Core mask: 0x1 00:10:43.276 00:10:43.276 Accel Perf Configuration: 00:10:43.276 Workload Type: compare 00:10:43.276 Transfer size: 4096 bytes 00:10:43.276 Vector count 1 00:10:43.276 Module: software 00:10:43.276 Queue depth: 32 00:10:43.276 Allocate depth: 32 00:10:43.276 # threads/core: 1 00:10:43.276 Run time: 1 seconds 00:10:43.276 Verify: Yes 00:10:43.276 00:10:43.276 Running for 1 seconds... 00:10:43.276 00:10:43.276 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:43.276 ------------------------------------------------------------------------------------ 00:10:43.276 0,0 516448/s 2017 MiB/s 0 0 00:10:43.276 ==================================================================================== 00:10:43.276 Total 516448/s 2017 MiB/s 0 0' 00:10:43.276 06:03:13 -- accel/accel.sh@20 -- # IFS=: 00:10:43.276 06:03:13 -- accel/accel.sh@20 -- # read -r var val 00:10:43.276 06:03:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:10:43.276 06:03:13 -- accel/accel.sh@12 -- # build_accel_config 00:10:43.276 06:03:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:43.276 06:03:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:43.276 06:03:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:43.276 06:03:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:43.276 06:03:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:43.276 06:03:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:43.276 06:03:13 -- accel/accel.sh@41 -- # local IFS=, 00:10:43.276 06:03:13 -- accel/accel.sh@42 -- # jq -r . 00:10:43.276 [2024-06-11 06:03:13.531501] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
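
compare is a memcmp-style check of two 4096-byte buffers (516448/s x 4096 B ~= 2017 MiB/s above); a mismatch would land in the Miscompares column. The per-second figure comes from a timed loop, roughly of the shape sketched here — queue depth and completion polling are omitted, so this is only the shape of the measurement, not accel_perf's code:

    import time

    def compare(a: bytes, b: bytes) -> bool:
        # The op itself: equal-length buffer comparison.
        return a == b

    def run_for_one_second(op, *bufs) -> int:
        # Crude analogue of the "Run time: 1 seconds" loop.
        n, deadline = 0, time.monotonic() + 1.0
        while time.monotonic() < deadline:
            op(*bufs)
            n += 1
        return n

    transfers = run_for_one_second(compare, b"x" * 4096, b"x" * 4096)
    print(transfers, "transfers/s ~=", transfers * 4096 / 2**20, "MiB/s")
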
00:10:43.276 [2024-06-11 06:03:13.532350] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107361 ] 00:10:43.276 [2024-06-11 06:03:13.712358] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.533 [2024-06-11 06:03:13.974384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.792 06:03:14 -- accel/accel.sh@21 -- # val= 00:10:43.792 06:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # IFS=: 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # read -r var val 00:10:43.792 06:03:14 -- accel/accel.sh@21 -- # val= 00:10:43.792 06:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # IFS=: 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # read -r var val 00:10:43.792 06:03:14 -- accel/accel.sh@21 -- # val=0x1 00:10:43.792 06:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # IFS=: 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # read -r var val 00:10:43.792 06:03:14 -- accel/accel.sh@21 -- # val= 00:10:43.792 06:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # IFS=: 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # read -r var val 00:10:43.792 06:03:14 -- accel/accel.sh@21 -- # val= 00:10:43.792 06:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # IFS=: 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # read -r var val 00:10:43.792 06:03:14 -- accel/accel.sh@21 -- # val=compare 00:10:43.792 06:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.792 06:03:14 -- accel/accel.sh@24 -- # accel_opc=compare 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # IFS=: 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # read -r var val 00:10:43.792 06:03:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:43.792 06:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # IFS=: 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # read -r var val 00:10:43.792 06:03:14 -- accel/accel.sh@21 -- # val= 00:10:43.792 06:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # IFS=: 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # read -r var val 00:10:43.792 06:03:14 -- accel/accel.sh@21 -- # val=software 00:10:43.792 06:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.792 06:03:14 -- accel/accel.sh@23 -- # accel_module=software 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # IFS=: 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # read -r var val 00:10:43.792 06:03:14 -- accel/accel.sh@21 -- # val=32 00:10:43.792 06:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # IFS=: 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # read -r var val 00:10:43.792 06:03:14 -- accel/accel.sh@21 -- # val=32 00:10:43.792 06:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # IFS=: 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # read -r var val 00:10:43.792 06:03:14 -- accel/accel.sh@21 -- # val=1 00:10:43.792 06:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # IFS=: 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # read -r var val 00:10:43.792 06:03:14 -- accel/accel.sh@21 -- # val='1 seconds' 
00:10:43.792 06:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # IFS=: 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # read -r var val 00:10:43.792 06:03:14 -- accel/accel.sh@21 -- # val=Yes 00:10:43.792 06:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # IFS=: 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # read -r var val 00:10:43.792 06:03:14 -- accel/accel.sh@21 -- # val= 00:10:43.792 06:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # IFS=: 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # read -r var val 00:10:43.792 06:03:14 -- accel/accel.sh@21 -- # val= 00:10:43.792 06:03:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # IFS=: 00:10:43.792 06:03:14 -- accel/accel.sh@20 -- # read -r var val 00:10:45.691 06:03:16 -- accel/accel.sh@21 -- # val= 00:10:45.691 06:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.691 06:03:16 -- accel/accel.sh@20 -- # IFS=: 00:10:45.691 06:03:16 -- accel/accel.sh@20 -- # read -r var val 00:10:45.691 06:03:16 -- accel/accel.sh@21 -- # val= 00:10:45.691 06:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.691 06:03:16 -- accel/accel.sh@20 -- # IFS=: 00:10:45.691 06:03:16 -- accel/accel.sh@20 -- # read -r var val 00:10:45.691 06:03:16 -- accel/accel.sh@21 -- # val= 00:10:45.691 06:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.691 06:03:16 -- accel/accel.sh@20 -- # IFS=: 00:10:45.691 06:03:16 -- accel/accel.sh@20 -- # read -r var val 00:10:45.691 06:03:16 -- accel/accel.sh@21 -- # val= 00:10:45.691 06:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.691 06:03:16 -- accel/accel.sh@20 -- # IFS=: 00:10:45.691 06:03:16 -- accel/accel.sh@20 -- # read -r var val 00:10:45.691 06:03:16 -- accel/accel.sh@21 -- # val= 00:10:45.691 06:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.691 06:03:16 -- accel/accel.sh@20 -- # IFS=: 00:10:45.691 06:03:16 -- accel/accel.sh@20 -- # read -r var val 00:10:45.691 06:03:16 -- accel/accel.sh@21 -- # val= 00:10:45.691 06:03:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.691 06:03:16 -- accel/accel.sh@20 -- # IFS=: 00:10:45.691 06:03:16 -- accel/accel.sh@20 -- # read -r var val 00:10:45.691 06:03:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:45.691 06:03:16 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:10:45.691 06:03:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:45.691 00:10:45.691 real 0m5.663s 00:10:45.691 user 0m4.979s 00:10:45.691 sys 0m0.521s 00:10:45.691 ************************************ 00:10:45.691 END TEST accel_compare 00:10:45.691 ************************************ 00:10:45.691 06:03:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:45.691 06:03:16 -- common/autotest_common.sh@10 -- # set +x 00:10:45.949 06:03:16 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:10:45.949 06:03:16 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:45.949 06:03:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:45.949 06:03:16 -- common/autotest_common.sh@10 -- # set +x 00:10:45.949 ************************************ 00:10:45.949 START TEST accel_xor 00:10:45.949 ************************************ 00:10:45.949 06:03:16 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:10:45.949 06:03:16 -- accel/accel.sh@16 -- # local accel_opc 00:10:45.949 06:03:16 -- accel/accel.sh@17 -- # local accel_module 00:10:45.949 
06:03:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:10:45.949 06:03:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:45.949 06:03:16 -- accel/accel.sh@12 -- # build_accel_config 00:10:45.949 06:03:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:45.949 06:03:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:45.949 06:03:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:45.949 06:03:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:45.949 06:03:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:45.949 06:03:16 -- accel/accel.sh@41 -- # local IFS=, 00:10:45.949 06:03:16 -- accel/accel.sh@42 -- # jq -r . 00:10:45.949 [2024-06-11 06:03:16.436659] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:45.949 [2024-06-11 06:03:16.436889] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107414 ] 00:10:46.207 [2024-06-11 06:03:16.620937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.464 [2024-06-11 06:03:16.881649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.025 06:03:19 -- accel/accel.sh@18 -- # out=' 00:10:49.025 SPDK Configuration: 00:10:49.025 Core mask: 0x1 00:10:49.025 00:10:49.025 Accel Perf Configuration: 00:10:49.025 Workload Type: xor 00:10:49.025 Source buffers: 2 00:10:49.025 Transfer size: 4096 bytes 00:10:49.025 Vector count 1 00:10:49.025 Module: software 00:10:49.025 Queue depth: 32 00:10:49.025 Allocate depth: 32 00:10:49.025 # threads/core: 1 00:10:49.025 Run time: 1 seconds 00:10:49.025 Verify: Yes 00:10:49.025 00:10:49.025 Running for 1 seconds... 00:10:49.025 00:10:49.025 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:49.025 ------------------------------------------------------------------------------------ 00:10:49.025 0,0 335968/s 1312 MiB/s 0 0 00:10:49.025 ==================================================================================== 00:10:49.025 Total 335968/s 1312 MiB/s 0 0' 00:10:49.025 06:03:19 -- accel/accel.sh@20 -- # IFS=: 00:10:49.025 06:03:19 -- accel/accel.sh@20 -- # read -r var val 00:10:49.025 06:03:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:10:49.025 06:03:19 -- accel/accel.sh@12 -- # build_accel_config 00:10:49.025 06:03:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:49.025 06:03:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:49.025 06:03:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:49.025 06:03:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:49.025 06:03:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:49.025 06:03:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:49.025 06:03:19 -- accel/accel.sh@41 -- # local IFS=, 00:10:49.025 06:03:19 -- accel/accel.sh@42 -- # jq -r . 00:10:49.025 [2024-06-11 06:03:19.313251] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
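
The xor workload folds N equal-sized source buffers ("Source buffers: 2" here, 3 in the -x 3 run further down) into one destination byte-wise — the same primitive RAID-5-style parity is built from. A sketch covering both runs:

    def xor_buffers(sources: list[bytes]) -> bytes:
        # Byte-wise XOR of N equal-length sources into one destination.
        assert len({len(s) for s in sources}) == 1, "sources must be equal length"
        out = bytearray(len(sources[0]))
        for src in sources:
            for i, byte in enumerate(src):
                out[i] ^= byte
        return bytes(out)

    parity = xor_buffers([b"\x0f" * 4096, b"\xf0" * 4096])
    assert parity == b"\xff" * 4096
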
00:10:49.025 [2024-06-11 06:03:19.313486] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107463 ] 00:10:49.025 [2024-06-11 06:03:19.502036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.283 [2024-06-11 06:03:19.813295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.541 06:03:20 -- accel/accel.sh@21 -- # val= 00:10:49.541 06:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # IFS=: 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # read -r var val 00:10:49.541 06:03:20 -- accel/accel.sh@21 -- # val= 00:10:49.541 06:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # IFS=: 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # read -r var val 00:10:49.541 06:03:20 -- accel/accel.sh@21 -- # val=0x1 00:10:49.541 06:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # IFS=: 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # read -r var val 00:10:49.541 06:03:20 -- accel/accel.sh@21 -- # val= 00:10:49.541 06:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # IFS=: 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # read -r var val 00:10:49.541 06:03:20 -- accel/accel.sh@21 -- # val= 00:10:49.541 06:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # IFS=: 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # read -r var val 00:10:49.541 06:03:20 -- accel/accel.sh@21 -- # val=xor 00:10:49.541 06:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.541 06:03:20 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # IFS=: 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # read -r var val 00:10:49.541 06:03:20 -- accel/accel.sh@21 -- # val=2 00:10:49.541 06:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # IFS=: 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # read -r var val 00:10:49.541 06:03:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:49.541 06:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # IFS=: 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # read -r var val 00:10:49.541 06:03:20 -- accel/accel.sh@21 -- # val= 00:10:49.541 06:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # IFS=: 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # read -r var val 00:10:49.541 06:03:20 -- accel/accel.sh@21 -- # val=software 00:10:49.541 06:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.541 06:03:20 -- accel/accel.sh@23 -- # accel_module=software 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # IFS=: 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # read -r var val 00:10:49.541 06:03:20 -- accel/accel.sh@21 -- # val=32 00:10:49.541 06:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # IFS=: 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # read -r var val 00:10:49.541 06:03:20 -- accel/accel.sh@21 -- # val=32 00:10:49.541 06:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # IFS=: 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # read -r var val 00:10:49.541 06:03:20 -- accel/accel.sh@21 -- # val=1 00:10:49.541 06:03:20 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # IFS=: 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # read -r var val 00:10:49.541 06:03:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:49.541 06:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # IFS=: 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # read -r var val 00:10:49.541 06:03:20 -- accel/accel.sh@21 -- # val=Yes 00:10:49.541 06:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # IFS=: 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # read -r var val 00:10:49.541 06:03:20 -- accel/accel.sh@21 -- # val= 00:10:49.541 06:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # IFS=: 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # read -r var val 00:10:49.541 06:03:20 -- accel/accel.sh@21 -- # val= 00:10:49.541 06:03:20 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # IFS=: 00:10:49.541 06:03:20 -- accel/accel.sh@20 -- # read -r var val 00:10:52.071 06:03:22 -- accel/accel.sh@21 -- # val= 00:10:52.071 06:03:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.071 06:03:22 -- accel/accel.sh@20 -- # IFS=: 00:10:52.071 06:03:22 -- accel/accel.sh@20 -- # read -r var val 00:10:52.071 06:03:22 -- accel/accel.sh@21 -- # val= 00:10:52.071 06:03:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.071 06:03:22 -- accel/accel.sh@20 -- # IFS=: 00:10:52.071 06:03:22 -- accel/accel.sh@20 -- # read -r var val 00:10:52.071 06:03:22 -- accel/accel.sh@21 -- # val= 00:10:52.071 06:03:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.071 06:03:22 -- accel/accel.sh@20 -- # IFS=: 00:10:52.071 06:03:22 -- accel/accel.sh@20 -- # read -r var val 00:10:52.071 06:03:22 -- accel/accel.sh@21 -- # val= 00:10:52.071 06:03:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.071 06:03:22 -- accel/accel.sh@20 -- # IFS=: 00:10:52.071 06:03:22 -- accel/accel.sh@20 -- # read -r var val 00:10:52.071 06:03:22 -- accel/accel.sh@21 -- # val= 00:10:52.071 06:03:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.071 06:03:22 -- accel/accel.sh@20 -- # IFS=: 00:10:52.071 06:03:22 -- accel/accel.sh@20 -- # read -r var val 00:10:52.071 06:03:22 -- accel/accel.sh@21 -- # val= 00:10:52.071 06:03:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:52.071 06:03:22 -- accel/accel.sh@20 -- # IFS=: 00:10:52.071 06:03:22 -- accel/accel.sh@20 -- # read -r var val 00:10:52.071 06:03:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:52.071 06:03:22 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:52.071 06:03:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:52.071 00:10:52.071 real 0m5.942s 00:10:52.071 user 0m5.210s 00:10:52.071 sys 0m0.572s 00:10:52.071 06:03:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.071 06:03:22 -- common/autotest_common.sh@10 -- # set +x 00:10:52.071 ************************************ 00:10:52.071 END TEST accel_xor 00:10:52.071 ************************************ 00:10:52.071 06:03:22 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:10:52.071 06:03:22 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:52.071 06:03:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:52.071 06:03:22 -- common/autotest_common.sh@10 -- # set +x 00:10:52.071 ************************************ 00:10:52.071 START TEST accel_xor 00:10:52.071 ************************************ 00:10:52.071 
06:03:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:10:52.071 06:03:22 -- accel/accel.sh@16 -- # local accel_opc 00:10:52.071 06:03:22 -- accel/accel.sh@17 -- # local accel_module 00:10:52.071 06:03:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:10:52.071 06:03:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:52.071 06:03:22 -- accel/accel.sh@12 -- # build_accel_config 00:10:52.071 06:03:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:52.071 06:03:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:52.071 06:03:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:52.071 06:03:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:52.071 06:03:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:52.071 06:03:22 -- accel/accel.sh@41 -- # local IFS=, 00:10:52.071 06:03:22 -- accel/accel.sh@42 -- # jq -r . 00:10:52.071 [2024-06-11 06:03:22.442534] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:52.071 [2024-06-11 06:03:22.442769] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107521 ] 00:10:52.071 [2024-06-11 06:03:22.629055] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.329 [2024-06-11 06:03:22.903779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.862 06:03:25 -- accel/accel.sh@18 -- # out=' 00:10:54.862 SPDK Configuration: 00:10:54.862 Core mask: 0x1 00:10:54.862 00:10:54.862 Accel Perf Configuration: 00:10:54.862 Workload Type: xor 00:10:54.862 Source buffers: 3 00:10:54.862 Transfer size: 4096 bytes 00:10:54.862 Vector count 1 00:10:54.862 Module: software 00:10:54.862 Queue depth: 32 00:10:54.862 Allocate depth: 32 00:10:54.862 # threads/core: 1 00:10:54.862 Run time: 1 seconds 00:10:54.862 Verify: Yes 00:10:54.862 00:10:54.862 Running for 1 seconds... 00:10:54.862 00:10:54.862 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:54.862 ------------------------------------------------------------------------------------ 00:10:54.862 0,0 292448/s 1142 MiB/s 0 0 00:10:54.862 ==================================================================================== 00:10:54.862 Total 292448/s 1142 MiB/s 0 0' 00:10:54.862 06:03:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:10:54.862 06:03:25 -- accel/accel.sh@20 -- # IFS=: 00:10:54.862 06:03:25 -- accel/accel.sh@20 -- # read -r var val 00:10:54.862 06:03:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:54.862 06:03:25 -- accel/accel.sh@12 -- # build_accel_config 00:10:54.862 06:03:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:54.862 06:03:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:54.862 06:03:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:54.862 06:03:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:54.862 06:03:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:54.862 06:03:25 -- accel/accel.sh@41 -- # local IFS=, 00:10:54.862 06:03:25 -- accel/accel.sh@42 -- # jq -r . 00:10:54.862 [2024-06-11 06:03:25.468017] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:10:54.862 [2024-06-11 06:03:25.468247] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107561 ] 00:10:55.131 [2024-06-11 06:03:25.649057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.390 [2024-06-11 06:03:25.929113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.649 06:03:26 -- accel/accel.sh@21 -- # val= 00:10:55.649 06:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # IFS=: 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # read -r var val 00:10:55.649 06:03:26 -- accel/accel.sh@21 -- # val= 00:10:55.649 06:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # IFS=: 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # read -r var val 00:10:55.649 06:03:26 -- accel/accel.sh@21 -- # val=0x1 00:10:55.649 06:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # IFS=: 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # read -r var val 00:10:55.649 06:03:26 -- accel/accel.sh@21 -- # val= 00:10:55.649 06:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # IFS=: 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # read -r var val 00:10:55.649 06:03:26 -- accel/accel.sh@21 -- # val= 00:10:55.649 06:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # IFS=: 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # read -r var val 00:10:55.649 06:03:26 -- accel/accel.sh@21 -- # val=xor 00:10:55.649 06:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.649 06:03:26 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # IFS=: 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # read -r var val 00:10:55.649 06:03:26 -- accel/accel.sh@21 -- # val=3 00:10:55.649 06:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # IFS=: 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # read -r var val 00:10:55.649 06:03:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:55.649 06:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # IFS=: 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # read -r var val 00:10:55.649 06:03:26 -- accel/accel.sh@21 -- # val= 00:10:55.649 06:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # IFS=: 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # read -r var val 00:10:55.649 06:03:26 -- accel/accel.sh@21 -- # val=software 00:10:55.649 06:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.649 06:03:26 -- accel/accel.sh@23 -- # accel_module=software 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # IFS=: 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # read -r var val 00:10:55.649 06:03:26 -- accel/accel.sh@21 -- # val=32 00:10:55.649 06:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # IFS=: 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # read -r var val 00:10:55.649 06:03:26 -- accel/accel.sh@21 -- # val=32 00:10:55.649 06:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # IFS=: 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # read -r var val 00:10:55.649 06:03:26 -- accel/accel.sh@21 -- # val=1 00:10:55.649 06:03:26 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # IFS=: 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # read -r var val 00:10:55.649 06:03:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:55.649 06:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # IFS=: 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # read -r var val 00:10:55.649 06:03:26 -- accel/accel.sh@21 -- # val=Yes 00:10:55.649 06:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # IFS=: 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # read -r var val 00:10:55.649 06:03:26 -- accel/accel.sh@21 -- # val= 00:10:55.649 06:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # IFS=: 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # read -r var val 00:10:55.649 06:03:26 -- accel/accel.sh@21 -- # val= 00:10:55.649 06:03:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # IFS=: 00:10:55.649 06:03:26 -- accel/accel.sh@20 -- # read -r var val 00:10:58.175 06:03:28 -- accel/accel.sh@21 -- # val= 00:10:58.175 06:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.175 06:03:28 -- accel/accel.sh@20 -- # IFS=: 00:10:58.175 06:03:28 -- accel/accel.sh@20 -- # read -r var val 00:10:58.175 06:03:28 -- accel/accel.sh@21 -- # val= 00:10:58.175 06:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.175 06:03:28 -- accel/accel.sh@20 -- # IFS=: 00:10:58.175 06:03:28 -- accel/accel.sh@20 -- # read -r var val 00:10:58.175 06:03:28 -- accel/accel.sh@21 -- # val= 00:10:58.175 06:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.175 06:03:28 -- accel/accel.sh@20 -- # IFS=: 00:10:58.175 06:03:28 -- accel/accel.sh@20 -- # read -r var val 00:10:58.175 06:03:28 -- accel/accel.sh@21 -- # val= 00:10:58.175 06:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.175 06:03:28 -- accel/accel.sh@20 -- # IFS=: 00:10:58.175 06:03:28 -- accel/accel.sh@20 -- # read -r var val 00:10:58.175 06:03:28 -- accel/accel.sh@21 -- # val= 00:10:58.176 06:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.176 06:03:28 -- accel/accel.sh@20 -- # IFS=: 00:10:58.176 06:03:28 -- accel/accel.sh@20 -- # read -r var val 00:10:58.176 06:03:28 -- accel/accel.sh@21 -- # val= 00:10:58.176 06:03:28 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.176 06:03:28 -- accel/accel.sh@20 -- # IFS=: 00:10:58.176 06:03:28 -- accel/accel.sh@20 -- # read -r var val 00:10:58.176 06:03:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:58.176 06:03:28 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:58.176 06:03:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:58.176 00:10:58.176 real 0m6.120s 00:10:58.176 user 0m5.381s 00:10:58.176 sys 0m0.576s 00:10:58.176 06:03:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.176 ************************************ 00:10:58.176 END TEST accel_xor 00:10:58.176 ************************************ 00:10:58.176 06:03:28 -- common/autotest_common.sh@10 -- # set +x 00:10:58.176 06:03:28 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:10:58.176 06:03:28 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:10:58.176 06:03:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:58.176 06:03:28 -- common/autotest_common.sh@10 -- # set +x 00:10:58.176 ************************************ 00:10:58.176 START TEST accel_dif_verify 00:10:58.176 ************************************ 
00:10:58.176 06:03:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:10:58.176 06:03:28 -- accel/accel.sh@16 -- # local accel_opc 00:10:58.176 06:03:28 -- accel/accel.sh@17 -- # local accel_module 00:10:58.176 06:03:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:10:58.176 06:03:28 -- accel/accel.sh@12 -- # build_accel_config 00:10:58.176 06:03:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:58.176 06:03:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:58.176 06:03:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:58.176 06:03:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:58.176 06:03:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:58.176 06:03:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:58.176 06:03:28 -- accel/accel.sh@41 -- # local IFS=, 00:10:58.176 06:03:28 -- accel/accel.sh@42 -- # jq -r . 00:10:58.176 [2024-06-11 06:03:28.628297] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:10:58.176 [2024-06-11 06:03:28.628503] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107618 ] 00:10:58.176 [2024-06-11 06:03:28.812121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.742 [2024-06-11 06:03:29.143804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.274 06:03:31 -- accel/accel.sh@18 -- # out=' 00:11:01.274 SPDK Configuration: 00:11:01.274 Core mask: 0x1 00:11:01.274 00:11:01.274 Accel Perf Configuration: 00:11:01.274 Workload Type: dif_verify 00:11:01.274 Vector size: 4096 bytes 00:11:01.274 Transfer size: 4096 bytes 00:11:01.274 Block size: 512 bytes 00:11:01.274 Metadata size: 8 bytes 00:11:01.274 Vector count 1 00:11:01.274 Module: software 00:11:01.274 Queue depth: 32 00:11:01.274 Allocate depth: 32 00:11:01.274 # threads/core: 1 00:11:01.274 Run time: 1 seconds 00:11:01.274 Verify: No 00:11:01.274 00:11:01.274 Running for 1 seconds... 00:11:01.274 00:11:01.274 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:01.274 ------------------------------------------------------------------------------------ 00:11:01.274 0,0 97120/s 385 MiB/s 0 0 00:11:01.274 ==================================================================================== 00:11:01.274 Total 97120/s 379 MiB/s 0 0' 00:11:01.274 06:03:31 -- accel/accel.sh@20 -- # IFS=: 00:11:01.274 06:03:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:11:01.274 06:03:31 -- accel/accel.sh@20 -- # read -r var val 00:11:01.274 06:03:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:11:01.274 06:03:31 -- accel/accel.sh@12 -- # build_accel_config 00:11:01.274 06:03:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:01.274 06:03:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:01.274 06:03:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:01.274 06:03:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:01.274 06:03:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:01.274 06:03:31 -- accel/accel.sh@41 -- # local IFS=, 00:11:01.274 06:03:31 -- accel/accel.sh@42 -- # jq -r . 00:11:01.274 [2024-06-11 06:03:31.757530] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
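The dif_verify configuration above pairs each 4096-byte transfer with protection information: the 512-byte block size and 8-byte metadata size mean every 512-byte chunk carries an 8-byte DIF (in the standard T10 layout, a 2-byte guard CRC, a 2-byte application tag, and a 4-byte reference tag), which this workload recomputes and checks. The summary's Total row follows from rate times transfer size; a quick cross-check of the 379 MiB/s figure:

    # Total bandwidth = transfers/s x 4096-byte transfer size, in MiB/s
    awk 'BEGIN { printf "%.1f MiB/s\n", 97120 * 4096 / (1024 * 1024) }'   # 379.4 MiB/s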
00:11:01.274 [2024-06-11 06:03:31.758548] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107667 ] 00:11:01.532 [2024-06-11 06:03:31.947448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.790 [2024-06-11 06:03:32.305944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.048 06:03:32 -- accel/accel.sh@21 -- # val= 00:11:02.048 06:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # IFS=: 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # read -r var val 00:11:02.048 06:03:32 -- accel/accel.sh@21 -- # val= 00:11:02.048 06:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # IFS=: 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # read -r var val 00:11:02.048 06:03:32 -- accel/accel.sh@21 -- # val=0x1 00:11:02.048 06:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # IFS=: 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # read -r var val 00:11:02.048 06:03:32 -- accel/accel.sh@21 -- # val= 00:11:02.048 06:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # IFS=: 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # read -r var val 00:11:02.048 06:03:32 -- accel/accel.sh@21 -- # val= 00:11:02.048 06:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # IFS=: 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # read -r var val 00:11:02.048 06:03:32 -- accel/accel.sh@21 -- # val=dif_verify 00:11:02.048 06:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.048 06:03:32 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # IFS=: 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # read -r var val 00:11:02.048 06:03:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:02.048 06:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # IFS=: 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # read -r var val 00:11:02.048 06:03:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:02.048 06:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # IFS=: 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # read -r var val 00:11:02.048 06:03:32 -- accel/accel.sh@21 -- # val='512 bytes' 00:11:02.048 06:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # IFS=: 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # read -r var val 00:11:02.048 06:03:32 -- accel/accel.sh@21 -- # val='8 bytes' 00:11:02.048 06:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # IFS=: 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # read -r var val 00:11:02.048 06:03:32 -- accel/accel.sh@21 -- # val= 00:11:02.048 06:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # IFS=: 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # read -r var val 00:11:02.048 06:03:32 -- accel/accel.sh@21 -- # val=software 00:11:02.048 06:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.048 06:03:32 -- accel/accel.sh@23 -- # accel_module=software 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # IFS=: 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # read -r var val 00:11:02.048 06:03:32 -- 
accel/accel.sh@21 -- # val=32 00:11:02.048 06:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # IFS=: 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # read -r var val 00:11:02.048 06:03:32 -- accel/accel.sh@21 -- # val=32 00:11:02.048 06:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # IFS=: 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # read -r var val 00:11:02.048 06:03:32 -- accel/accel.sh@21 -- # val=1 00:11:02.048 06:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # IFS=: 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # read -r var val 00:11:02.048 06:03:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:02.048 06:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # IFS=: 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # read -r var val 00:11:02.048 06:03:32 -- accel/accel.sh@21 -- # val=No 00:11:02.048 06:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # IFS=: 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # read -r var val 00:11:02.048 06:03:32 -- accel/accel.sh@21 -- # val= 00:11:02.048 06:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # IFS=: 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # read -r var val 00:11:02.048 06:03:32 -- accel/accel.sh@21 -- # val= 00:11:02.048 06:03:32 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # IFS=: 00:11:02.048 06:03:32 -- accel/accel.sh@20 -- # read -r var val 00:11:04.578 06:03:34 -- accel/accel.sh@21 -- # val= 00:11:04.578 06:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.578 06:03:34 -- accel/accel.sh@20 -- # IFS=: 00:11:04.578 06:03:34 -- accel/accel.sh@20 -- # read -r var val 00:11:04.578 06:03:34 -- accel/accel.sh@21 -- # val= 00:11:04.578 06:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.578 06:03:34 -- accel/accel.sh@20 -- # IFS=: 00:11:04.578 06:03:34 -- accel/accel.sh@20 -- # read -r var val 00:11:04.578 06:03:34 -- accel/accel.sh@21 -- # val= 00:11:04.578 06:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.578 06:03:34 -- accel/accel.sh@20 -- # IFS=: 00:11:04.578 06:03:34 -- accel/accel.sh@20 -- # read -r var val 00:11:04.578 06:03:34 -- accel/accel.sh@21 -- # val= 00:11:04.578 06:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.578 06:03:34 -- accel/accel.sh@20 -- # IFS=: 00:11:04.578 06:03:34 -- accel/accel.sh@20 -- # read -r var val 00:11:04.578 06:03:34 -- accel/accel.sh@21 -- # val= 00:11:04.578 06:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.578 06:03:34 -- accel/accel.sh@20 -- # IFS=: 00:11:04.578 06:03:34 -- accel/accel.sh@20 -- # read -r var val 00:11:04.578 06:03:34 -- accel/accel.sh@21 -- # val= 00:11:04.578 06:03:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.578 06:03:34 -- accel/accel.sh@20 -- # IFS=: 00:11:04.578 06:03:34 -- accel/accel.sh@20 -- # read -r var val 00:11:04.578 06:03:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:04.578 06:03:34 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:11:04.578 06:03:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:04.578 00:11:04.578 real 0m6.299s 00:11:04.578 user 0m5.512s 00:11:04.578 sys 0m0.602s 00:11:04.578 06:03:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:04.578 06:03:34 -- common/autotest_common.sh@10 -- # set +x 00:11:04.578 ************************************ 00:11:04.578 END 
TEST accel_dif_verify 00:11:04.578 ************************************ 00:11:04.578 06:03:34 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:11:04.578 06:03:34 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:04.578 06:03:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:04.578 06:03:34 -- common/autotest_common.sh@10 -- # set +x 00:11:04.578 ************************************ 00:11:04.578 START TEST accel_dif_generate 00:11:04.578 ************************************ 00:11:04.578 06:03:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:11:04.578 06:03:34 -- accel/accel.sh@16 -- # local accel_opc 00:11:04.578 06:03:34 -- accel/accel.sh@17 -- # local accel_module 00:11:04.578 06:03:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:11:04.578 06:03:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:04.578 06:03:34 -- accel/accel.sh@12 -- # build_accel_config 00:11:04.578 06:03:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:04.578 06:03:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:04.578 06:03:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:04.578 06:03:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:04.578 06:03:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:04.578 06:03:34 -- accel/accel.sh@41 -- # local IFS=, 00:11:04.578 06:03:34 -- accel/accel.sh@42 -- # jq -r . 00:11:04.578 [2024-06-11 06:03:34.999097] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:11:04.578 [2024-06-11 06:03:34.999359] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107724 ] 00:11:04.578 [2024-06-11 06:03:35.191279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.145 [2024-06-11 06:03:35.524580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.712 06:03:38 -- accel/accel.sh@18 -- # out=' 00:11:07.712 SPDK Configuration: 00:11:07.712 Core mask: 0x1 00:11:07.712 00:11:07.712 Accel Perf Configuration: 00:11:07.712 Workload Type: dif_generate 00:11:07.712 Vector size: 4096 bytes 00:11:07.712 Transfer size: 4096 bytes 00:11:07.712 Block size: 512 bytes 00:11:07.712 Metadata size: 8 bytes 00:11:07.712 Vector count 1 00:11:07.712 Module: software 00:11:07.712 Queue depth: 32 00:11:07.712 Allocate depth: 32 00:11:07.712 # threads/core: 1 00:11:07.712 Run time: 1 seconds 00:11:07.712 Verify: No 00:11:07.712 00:11:07.712 Running for 1 seconds... 
00:11:07.712 00:11:07.712 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:07.712 ------------------------------------------------------------------------------------ 00:11:07.712 0,0 118208/s 468 MiB/s 0 0 00:11:07.712 ==================================================================================== 00:11:07.712 Total 118208/s 461 MiB/s 0 0' 00:11:07.712 06:03:38 -- accel/accel.sh@20 -- # IFS=: 00:11:07.712 06:03:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:11:07.712 06:03:38 -- accel/accel.sh@20 -- # read -r var val 00:11:07.712 06:03:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:07.712 06:03:38 -- accel/accel.sh@12 -- # build_accel_config 00:11:07.712 06:03:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:07.712 06:03:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:07.712 06:03:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:07.712 06:03:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:07.712 06:03:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:07.712 06:03:38 -- accel/accel.sh@41 -- # local IFS=, 00:11:07.712 06:03:38 -- accel/accel.sh@42 -- # jq -r . 00:11:07.712 [2024-06-11 06:03:38.076947] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:11:07.712 [2024-06-11 06:03:38.077153] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107766 ] 00:11:07.712 [2024-06-11 06:03:38.261795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.971 [2024-06-11 06:03:38.534282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.229 06:03:38 -- accel/accel.sh@21 -- # val= 00:11:08.229 06:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # IFS=: 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # read -r var val 00:11:08.229 06:03:38 -- accel/accel.sh@21 -- # val= 00:11:08.229 06:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # IFS=: 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # read -r var val 00:11:08.229 06:03:38 -- accel/accel.sh@21 -- # val=0x1 00:11:08.229 06:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # IFS=: 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # read -r var val 00:11:08.229 06:03:38 -- accel/accel.sh@21 -- # val= 00:11:08.229 06:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # IFS=: 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # read -r var val 00:11:08.229 06:03:38 -- accel/accel.sh@21 -- # val= 00:11:08.229 06:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # IFS=: 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # read -r var val 00:11:08.229 06:03:38 -- accel/accel.sh@21 -- # val=dif_generate 00:11:08.229 06:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.229 06:03:38 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # IFS=: 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # read -r var val 00:11:08.229 06:03:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:08.229 06:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # IFS=: 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # read -r var val 
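dif_generate is the generation-only counterpart: it computes the per-512-byte-block protection field without a verify pass, which is consistent with its higher rate here (118208/s vs 97120/s for dif_verify). A single workload can also be reproduced outside the harness with the accel_perf binary shown in the trace; the `-c /dev/fd/62` argument only feeds the harness's accel JSON config and — an assumption that holds for software-module defaults — can be dropped:

    # Standalone re-run of just this workload against the built tree
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate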
00:11:08.229 06:03:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:08.229 06:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # IFS=: 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # read -r var val 00:11:08.229 06:03:38 -- accel/accel.sh@21 -- # val='512 bytes' 00:11:08.229 06:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # IFS=: 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # read -r var val 00:11:08.229 06:03:38 -- accel/accel.sh@21 -- # val='8 bytes' 00:11:08.229 06:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # IFS=: 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # read -r var val 00:11:08.229 06:03:38 -- accel/accel.sh@21 -- # val= 00:11:08.229 06:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # IFS=: 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # read -r var val 00:11:08.229 06:03:38 -- accel/accel.sh@21 -- # val=software 00:11:08.229 06:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.229 06:03:38 -- accel/accel.sh@23 -- # accel_module=software 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # IFS=: 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # read -r var val 00:11:08.229 06:03:38 -- accel/accel.sh@21 -- # val=32 00:11:08.229 06:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # IFS=: 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # read -r var val 00:11:08.229 06:03:38 -- accel/accel.sh@21 -- # val=32 00:11:08.229 06:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # IFS=: 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # read -r var val 00:11:08.229 06:03:38 -- accel/accel.sh@21 -- # val=1 00:11:08.229 06:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # IFS=: 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # read -r var val 00:11:08.229 06:03:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:08.229 06:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # IFS=: 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # read -r var val 00:11:08.229 06:03:38 -- accel/accel.sh@21 -- # val=No 00:11:08.229 06:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # IFS=: 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # read -r var val 00:11:08.229 06:03:38 -- accel/accel.sh@21 -- # val= 00:11:08.229 06:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # IFS=: 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # read -r var val 00:11:08.229 06:03:38 -- accel/accel.sh@21 -- # val= 00:11:08.229 06:03:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # IFS=: 00:11:08.229 06:03:38 -- accel/accel.sh@20 -- # read -r var val 00:11:10.778 06:03:40 -- accel/accel.sh@21 -- # val= 00:11:10.778 06:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.778 06:03:40 -- accel/accel.sh@20 -- # IFS=: 00:11:10.778 06:03:40 -- accel/accel.sh@20 -- # read -r var val 00:11:10.778 06:03:40 -- accel/accel.sh@21 -- # val= 00:11:10.778 06:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.778 06:03:40 -- accel/accel.sh@20 -- # IFS=: 00:11:10.778 06:03:40 -- accel/accel.sh@20 -- # read -r var val 00:11:10.778 06:03:40 -- accel/accel.sh@21 -- # val= 00:11:10.778 06:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.779 06:03:40 -- 
accel/accel.sh@20 -- # IFS=: 00:11:10.779 06:03:40 -- accel/accel.sh@20 -- # read -r var val 00:11:10.779 06:03:40 -- accel/accel.sh@21 -- # val= 00:11:10.779 06:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.779 06:03:40 -- accel/accel.sh@20 -- # IFS=: 00:11:10.779 06:03:40 -- accel/accel.sh@20 -- # read -r var val 00:11:10.779 06:03:40 -- accel/accel.sh@21 -- # val= 00:11:10.779 06:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.779 06:03:40 -- accel/accel.sh@20 -- # IFS=: 00:11:10.779 06:03:40 -- accel/accel.sh@20 -- # read -r var val 00:11:10.779 06:03:40 -- accel/accel.sh@21 -- # val= 00:11:10.779 06:03:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.779 06:03:40 -- accel/accel.sh@20 -- # IFS=: 00:11:10.779 06:03:40 -- accel/accel.sh@20 -- # read -r var val 00:11:10.779 06:03:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:10.779 06:03:40 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:11:10.779 06:03:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:10.779 00:11:10.779 real 0m5.945s 00:11:10.779 user 0m5.223s 00:11:10.779 sys 0m0.551s 00:11:10.779 06:03:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:10.779 06:03:40 -- common/autotest_common.sh@10 -- # set +x 00:11:10.779 ************************************ 00:11:10.779 END TEST accel_dif_generate 00:11:10.779 ************************************ 00:11:10.779 06:03:40 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:11:10.779 06:03:40 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:10.779 06:03:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:10.779 06:03:40 -- common/autotest_common.sh@10 -- # set +x 00:11:10.779 ************************************ 00:11:10.779 START TEST accel_dif_generate_copy 00:11:10.779 ************************************ 00:11:10.779 06:03:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:11:10.779 06:03:40 -- accel/accel.sh@16 -- # local accel_opc 00:11:10.779 06:03:40 -- accel/accel.sh@17 -- # local accel_module 00:11:10.779 06:03:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:11:10.779 06:03:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:10.779 06:03:40 -- accel/accel.sh@12 -- # build_accel_config 00:11:10.779 06:03:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:10.779 06:03:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:10.779 06:03:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:10.779 06:03:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:10.779 06:03:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:10.779 06:03:40 -- accel/accel.sh@41 -- # local IFS=, 00:11:10.779 06:03:40 -- accel/accel.sh@42 -- # jq -r . 00:11:10.779 [2024-06-11 06:03:41.002924] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:11:10.779 [2024-06-11 06:03:41.003594] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107825 ] 00:11:10.779 [2024-06-11 06:03:41.182565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.037 [2024-06-11 06:03:41.441779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.567 06:03:43 -- accel/accel.sh@18 -- # out=' 00:11:13.567 SPDK Configuration: 00:11:13.567 Core mask: 0x1 00:11:13.567 00:11:13.567 Accel Perf Configuration: 00:11:13.567 Workload Type: dif_generate_copy 00:11:13.567 Vector size: 4096 bytes 00:11:13.567 Transfer size: 4096 bytes 00:11:13.567 Vector count 1 00:11:13.567 Module: software 00:11:13.567 Queue depth: 32 00:11:13.567 Allocate depth: 32 00:11:13.567 # threads/core: 1 00:11:13.567 Run time: 1 seconds 00:11:13.567 Verify: No 00:11:13.567 00:11:13.567 Running for 1 seconds... 00:11:13.567 00:11:13.567 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:13.567 ------------------------------------------------------------------------------------ 00:11:13.567 0,0 109984/s 436 MiB/s 0 0 00:11:13.567 ==================================================================================== 00:11:13.567 Total 109984/s 429 MiB/s 0 0' 00:11:13.567 06:03:43 -- accel/accel.sh@20 -- # IFS=: 00:11:13.567 06:03:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:11:13.567 06:03:43 -- accel/accel.sh@20 -- # read -r var val 00:11:13.567 06:03:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:13.567 06:03:43 -- accel/accel.sh@12 -- # build_accel_config 00:11:13.567 06:03:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:13.567 06:03:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:13.567 06:03:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:13.567 06:03:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:13.567 06:03:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:13.567 06:03:43 -- accel/accel.sh@41 -- # local IFS=, 00:11:13.567 06:03:43 -- accel/accel.sh@42 -- # jq -r . 00:11:13.567 [2024-06-11 06:03:43.842282] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
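The real/user/sys triplets that close each test (for example `real 0m5.703s` further down for this dif_generate_copy case) appear to come from bash's `time` builtin wrapping each run_test call — the ~5.7 s wall time covers both 1-second accel_perf runs plus app startup and teardown. The same measurement works standalone:

    # Time one run directly; expect ~1s of workload plus startup overhead
    time /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy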
00:11:13.567 [2024-06-11 06:03:43.842505] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107872 ] 00:11:13.567 [2024-06-11 06:03:44.025047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.826 [2024-06-11 06:03:44.271792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.085 06:03:44 -- accel/accel.sh@21 -- # val= 00:11:14.085 06:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # IFS=: 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # read -r var val 00:11:14.085 06:03:44 -- accel/accel.sh@21 -- # val= 00:11:14.085 06:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # IFS=: 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # read -r var val 00:11:14.085 06:03:44 -- accel/accel.sh@21 -- # val=0x1 00:11:14.085 06:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # IFS=: 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # read -r var val 00:11:14.085 06:03:44 -- accel/accel.sh@21 -- # val= 00:11:14.085 06:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # IFS=: 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # read -r var val 00:11:14.085 06:03:44 -- accel/accel.sh@21 -- # val= 00:11:14.085 06:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # IFS=: 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # read -r var val 00:11:14.085 06:03:44 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:11:14.085 06:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.085 06:03:44 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # IFS=: 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # read -r var val 00:11:14.085 06:03:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:14.085 06:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # IFS=: 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # read -r var val 00:11:14.085 06:03:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:14.085 06:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # IFS=: 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # read -r var val 00:11:14.085 06:03:44 -- accel/accel.sh@21 -- # val= 00:11:14.085 06:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # IFS=: 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # read -r var val 00:11:14.085 06:03:44 -- accel/accel.sh@21 -- # val=software 00:11:14.085 06:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.085 06:03:44 -- accel/accel.sh@23 -- # accel_module=software 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # IFS=: 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # read -r var val 00:11:14.085 06:03:44 -- accel/accel.sh@21 -- # val=32 00:11:14.085 06:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # IFS=: 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # read -r var val 00:11:14.085 06:03:44 -- accel/accel.sh@21 -- # val=32 00:11:14.085 06:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # IFS=: 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # read -r var val 00:11:14.085 06:03:44 -- accel/accel.sh@21 
-- # val=1 00:11:14.085 06:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # IFS=: 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # read -r var val 00:11:14.085 06:03:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:14.085 06:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # IFS=: 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # read -r var val 00:11:14.085 06:03:44 -- accel/accel.sh@21 -- # val=No 00:11:14.085 06:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # IFS=: 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # read -r var val 00:11:14.085 06:03:44 -- accel/accel.sh@21 -- # val= 00:11:14.085 06:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # IFS=: 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # read -r var val 00:11:14.085 06:03:44 -- accel/accel.sh@21 -- # val= 00:11:14.085 06:03:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # IFS=: 00:11:14.085 06:03:44 -- accel/accel.sh@20 -- # read -r var val 00:11:16.615 06:03:46 -- accel/accel.sh@21 -- # val= 00:11:16.615 06:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.615 06:03:46 -- accel/accel.sh@20 -- # IFS=: 00:11:16.615 06:03:46 -- accel/accel.sh@20 -- # read -r var val 00:11:16.615 06:03:46 -- accel/accel.sh@21 -- # val= 00:11:16.615 06:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.615 06:03:46 -- accel/accel.sh@20 -- # IFS=: 00:11:16.615 06:03:46 -- accel/accel.sh@20 -- # read -r var val 00:11:16.615 06:03:46 -- accel/accel.sh@21 -- # val= 00:11:16.615 06:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.615 06:03:46 -- accel/accel.sh@20 -- # IFS=: 00:11:16.615 06:03:46 -- accel/accel.sh@20 -- # read -r var val 00:11:16.615 06:03:46 -- accel/accel.sh@21 -- # val= 00:11:16.615 06:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.615 06:03:46 -- accel/accel.sh@20 -- # IFS=: 00:11:16.615 06:03:46 -- accel/accel.sh@20 -- # read -r var val 00:11:16.615 06:03:46 -- accel/accel.sh@21 -- # val= 00:11:16.615 06:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.615 06:03:46 -- accel/accel.sh@20 -- # IFS=: 00:11:16.615 06:03:46 -- accel/accel.sh@20 -- # read -r var val 00:11:16.615 06:03:46 -- accel/accel.sh@21 -- # val= 00:11:16.615 06:03:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:16.615 06:03:46 -- accel/accel.sh@20 -- # IFS=: 00:11:16.615 06:03:46 -- accel/accel.sh@20 -- # read -r var val 00:11:16.615 06:03:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:16.615 06:03:46 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:11:16.615 06:03:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:16.615 00:11:16.615 real 0m5.703s 00:11:16.615 user 0m5.016s 00:11:16.615 sys 0m0.511s 00:11:16.615 06:03:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:16.615 06:03:46 -- common/autotest_common.sh@10 -- # set +x 00:11:16.615 ************************************ 00:11:16.615 END TEST accel_dif_generate_copy 00:11:16.615 ************************************ 00:11:16.615 06:03:46 -- accel/accel.sh@107 -- # [[ y == y ]] 00:11:16.615 06:03:46 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:16.615 06:03:46 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:11:16.615 06:03:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:16.615 06:03:46 -- 
common/autotest_common.sh@10 -- # set +x 00:11:16.615 ************************************ 00:11:16.615 START TEST accel_comp 00:11:16.615 ************************************ 00:11:16.615 06:03:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:16.615 06:03:46 -- accel/accel.sh@16 -- # local accel_opc 00:11:16.615 06:03:46 -- accel/accel.sh@17 -- # local accel_module 00:11:16.615 06:03:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:16.615 06:03:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:16.615 06:03:46 -- accel/accel.sh@12 -- # build_accel_config 00:11:16.615 06:03:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:16.615 06:03:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:16.615 06:03:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:16.615 06:03:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:16.615 06:03:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:16.615 06:03:46 -- accel/accel.sh@41 -- # local IFS=, 00:11:16.615 06:03:46 -- accel/accel.sh@42 -- # jq -r . 00:11:16.615 [2024-06-11 06:03:46.770683] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:11:16.615 [2024-06-11 06:03:46.770879] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107924 ] 00:11:16.615 [2024-06-11 06:03:46.949452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.615 [2024-06-11 06:03:47.203572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.146 06:03:49 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:19.146 00:11:19.146 SPDK Configuration: 00:11:19.146 Core mask: 0x1 00:11:19.146 00:11:19.146 Accel Perf Configuration: 00:11:19.146 Workload Type: compress 00:11:19.146 Transfer size: 4096 bytes 00:11:19.146 Vector count 1 00:11:19.146 Module: software 00:11:19.146 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:19.146 Queue depth: 32 00:11:19.146 Allocate depth: 32 00:11:19.146 # threads/core: 1 00:11:19.146 Run time: 1 seconds 00:11:19.146 Verify: No 00:11:19.146 00:11:19.146 Running for 1 seconds... 
00:11:19.146 00:11:19.146 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:19.146 ------------------------------------------------------------------------------------ 00:11:19.146 0,0 58528/s 243 MiB/s 0 0 00:11:19.146 ==================================================================================== 00:11:19.146 Total 58528/s 228 MiB/s 0 0' 00:11:19.146 06:03:49 -- accel/accel.sh@20 -- # IFS=: 00:11:19.146 06:03:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:19.146 06:03:49 -- accel/accel.sh@20 -- # read -r var val 00:11:19.146 06:03:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:19.146 06:03:49 -- accel/accel.sh@12 -- # build_accel_config 00:11:19.146 06:03:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:19.146 06:03:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:19.146 06:03:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:19.146 06:03:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:19.146 06:03:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:19.146 06:03:49 -- accel/accel.sh@41 -- # local IFS=, 00:11:19.146 06:03:49 -- accel/accel.sh@42 -- # jq -r . 00:11:19.146 [2024-06-11 06:03:49.608383] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:11:19.146 [2024-06-11 06:03:49.608583] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107964 ] 00:11:19.146 [2024-06-11 06:03:49.789186] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.711 [2024-06-11 06:03:50.051435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.711 06:03:50 -- accel/accel.sh@21 -- # val= 00:11:19.711 06:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.711 06:03:50 -- accel/accel.sh@20 -- # IFS=: 00:11:19.711 06:03:50 -- accel/accel.sh@20 -- # read -r var val 00:11:19.711 06:03:50 -- accel/accel.sh@21 -- # val= 00:11:19.711 06:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.711 06:03:50 -- accel/accel.sh@20 -- # IFS=: 00:11:19.711 06:03:50 -- accel/accel.sh@20 -- # read -r var val 00:11:19.711 06:03:50 -- accel/accel.sh@21 -- # val= 00:11:19.711 06:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.711 06:03:50 -- accel/accel.sh@20 -- # IFS=: 00:11:19.711 06:03:50 -- accel/accel.sh@20 -- # read -r var val 00:11:19.711 06:03:50 -- accel/accel.sh@21 -- # val=0x1 00:11:19.711 06:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.711 06:03:50 -- accel/accel.sh@20 -- # IFS=: 00:11:19.711 06:03:50 -- accel/accel.sh@20 -- # read -r var val 00:11:19.711 06:03:50 -- accel/accel.sh@21 -- # val= 00:11:19.711 06:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.711 06:03:50 -- accel/accel.sh@20 -- # IFS=: 00:11:19.711 06:03:50 -- accel/accel.sh@20 -- # read -r var val 00:11:19.711 06:03:50 -- accel/accel.sh@21 -- # val= 00:11:19.711 06:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.711 06:03:50 -- accel/accel.sh@20 -- # IFS=: 00:11:19.712 06:03:50 -- accel/accel.sh@20 -- # read -r var val 00:11:19.712 06:03:50 -- accel/accel.sh@21 -- # val=compress 00:11:19.712 06:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.712 06:03:50 -- accel/accel.sh@24 -- # accel_opc=compress 00:11:19.712 06:03:50 -- accel/accel.sh@20 -- # IFS=: 
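Unlike the fixed 4096-byte patterns above, the compress workload reads real input: `-l` points accel_perf at a data file (the test tree's bib file), which is what the "Preparing input file..." line refers to. The equivalent direct invocation, using the exact path from the trace:

    # Compress the same input file outside the harness
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib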
00:11:19.712 06:03:50 -- accel/accel.sh@20 -- # read -r var val 00:11:19.712 06:03:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:19.712 06:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.712 06:03:50 -- accel/accel.sh@20 -- # IFS=: 00:11:19.712 06:03:50 -- accel/accel.sh@20 -- # read -r var val 00:11:19.712 06:03:50 -- accel/accel.sh@21 -- # val= 00:11:19.712 06:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.712 06:03:50 -- accel/accel.sh@20 -- # IFS=: 00:11:19.712 06:03:50 -- accel/accel.sh@20 -- # read -r var val 00:11:19.712 06:03:50 -- accel/accel.sh@21 -- # val=software 00:11:19.712 06:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.712 06:03:50 -- accel/accel.sh@23 -- # accel_module=software 00:11:19.712 06:03:50 -- accel/accel.sh@20 -- # IFS=: 00:11:19.712 06:03:50 -- accel/accel.sh@20 -- # read -r var val 00:11:19.712 06:03:50 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:19.712 06:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.712 06:03:50 -- accel/accel.sh@20 -- # IFS=: 00:11:19.712 06:03:50 -- accel/accel.sh@20 -- # read -r var val 00:11:19.712 06:03:50 -- accel/accel.sh@21 -- # val=32 00:11:19.712 06:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.712 06:03:50 -- accel/accel.sh@20 -- # IFS=: 00:11:19.712 06:03:50 -- accel/accel.sh@20 -- # read -r var val 00:11:19.712 06:03:50 -- accel/accel.sh@21 -- # val=32 00:11:19.712 06:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.712 06:03:50 -- accel/accel.sh@20 -- # IFS=: 00:11:19.712 06:03:50 -- accel/accel.sh@20 -- # read -r var val 00:11:19.712 06:03:50 -- accel/accel.sh@21 -- # val=1 00:11:19.712 06:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.712 06:03:50 -- accel/accel.sh@20 -- # IFS=: 00:11:19.712 06:03:50 -- accel/accel.sh@20 -- # read -r var val 00:11:19.712 06:03:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:19.712 06:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.712 06:03:50 -- accel/accel.sh@20 -- # IFS=: 00:11:19.712 06:03:50 -- accel/accel.sh@20 -- # read -r var val 00:11:19.712 06:03:50 -- accel/accel.sh@21 -- # val=No 00:11:19.712 06:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.712 06:03:50 -- accel/accel.sh@20 -- # IFS=: 00:11:19.712 06:03:50 -- accel/accel.sh@20 -- # read -r var val 00:11:19.712 06:03:50 -- accel/accel.sh@21 -- # val= 00:11:19.712 06:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.712 06:03:50 -- accel/accel.sh@20 -- # IFS=: 00:11:19.712 06:03:50 -- accel/accel.sh@20 -- # read -r var val 00:11:19.712 06:03:50 -- accel/accel.sh@21 -- # val= 00:11:19.712 06:03:50 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.712 06:03:50 -- accel/accel.sh@20 -- # IFS=: 00:11:19.712 06:03:50 -- accel/accel.sh@20 -- # read -r var val 00:11:22.237 06:03:52 -- accel/accel.sh@21 -- # val= 00:11:22.237 06:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.237 06:03:52 -- accel/accel.sh@20 -- # IFS=: 00:11:22.237 06:03:52 -- accel/accel.sh@20 -- # read -r var val 00:11:22.237 06:03:52 -- accel/accel.sh@21 -- # val= 00:11:22.237 06:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.237 06:03:52 -- accel/accel.sh@20 -- # IFS=: 00:11:22.237 06:03:52 -- accel/accel.sh@20 -- # read -r var val 00:11:22.237 06:03:52 -- accel/accel.sh@21 -- # val= 00:11:22.237 06:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.237 06:03:52 -- accel/accel.sh@20 -- # IFS=: 00:11:22.237 06:03:52 -- accel/accel.sh@20 -- # read -r var val 00:11:22.237 06:03:52 -- accel/accel.sh@21 -- # val= 
00:11:22.237 06:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.237 06:03:52 -- accel/accel.sh@20 -- # IFS=: 00:11:22.237 06:03:52 -- accel/accel.sh@20 -- # read -r var val 00:11:22.237 06:03:52 -- accel/accel.sh@21 -- # val= 00:11:22.237 06:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.237 06:03:52 -- accel/accel.sh@20 -- # IFS=: 00:11:22.237 06:03:52 -- accel/accel.sh@20 -- # read -r var val 00:11:22.237 06:03:52 -- accel/accel.sh@21 -- # val= 00:11:22.237 06:03:52 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.237 06:03:52 -- accel/accel.sh@20 -- # IFS=: 00:11:22.237 06:03:52 -- accel/accel.sh@20 -- # read -r var val 00:11:22.237 06:03:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:22.237 06:03:52 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:11:22.237 06:03:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:22.237 00:11:22.237 real 0m5.683s 00:11:22.237 user 0m5.005s 00:11:22.237 sys 0m0.515s 00:11:22.237 06:03:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.237 06:03:52 -- common/autotest_common.sh@10 -- # set +x 00:11:22.237 ************************************ 00:11:22.237 END TEST accel_comp 00:11:22.237 ************************************ 00:11:22.237 06:03:52 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:22.237 06:03:52 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:22.237 06:03:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:22.237 06:03:52 -- common/autotest_common.sh@10 -- # set +x 00:11:22.237 ************************************ 00:11:22.237 START TEST accel_decomp 00:11:22.237 ************************************ 00:11:22.237 06:03:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:22.237 06:03:52 -- accel/accel.sh@16 -- # local accel_opc 00:11:22.237 06:03:52 -- accel/accel.sh@17 -- # local accel_module 00:11:22.237 06:03:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:22.237 06:03:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:22.237 06:03:52 -- accel/accel.sh@12 -- # build_accel_config 00:11:22.237 06:03:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:22.237 06:03:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:22.237 06:03:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:22.237 06:03:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:22.237 06:03:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:22.237 06:03:52 -- accel/accel.sh@41 -- # local IFS=, 00:11:22.237 06:03:52 -- accel/accel.sh@42 -- # jq -r . 00:11:22.237 [2024-06-11 06:03:52.526912] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:11:22.237 [2024-06-11 06:03:52.527109] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108018 ] 00:11:22.237 [2024-06-11 06:03:52.710247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.495 [2024-06-11 06:03:52.946543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.022 06:03:55 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:25.022 00:11:25.022 SPDK Configuration: 00:11:25.022 Core mask: 0x1 00:11:25.022 00:11:25.022 Accel Perf Configuration: 00:11:25.022 Workload Type: decompress 00:11:25.022 Transfer size: 4096 bytes 00:11:25.022 Vector count 1 00:11:25.022 Module: software 00:11:25.022 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:25.022 Queue depth: 32 00:11:25.022 Allocate depth: 32 00:11:25.022 # threads/core: 1 00:11:25.022 Run time: 1 seconds 00:11:25.022 Verify: Yes 00:11:25.022 00:11:25.022 Running for 1 seconds... 00:11:25.022 00:11:25.022 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:25.022 ------------------------------------------------------------------------------------ 00:11:25.022 0,0 63872/s 117 MiB/s 0 0 00:11:25.022 ==================================================================================== 00:11:25.022 Total 63872/s 249 MiB/s 0 0' 00:11:25.022 06:03:55 -- accel/accel.sh@20 -- # IFS=: 00:11:25.022 06:03:55 -- accel/accel.sh@20 -- # read -r var val 00:11:25.022 06:03:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:25.022 06:03:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:25.022 06:03:55 -- accel/accel.sh@12 -- # build_accel_config 00:11:25.022 06:03:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:25.022 06:03:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:25.022 06:03:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:25.022 06:03:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:25.022 06:03:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:25.022 06:03:55 -- accel/accel.sh@41 -- # local IFS=, 00:11:25.022 06:03:55 -- accel/accel.sh@42 -- # jq -r . 00:11:25.022 [2024-06-11 06:03:55.353772] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
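The decompress runs add `-y`, which turns on data verification (Verify: Yes in the configuration above), so output is checked against the original input rather than just timed. The Total row again follows from rate times the 4096-byte transfer size:

    # Cross-check the decompress Total bandwidth
    awk 'BEGIN { printf "%.1f MiB/s\n", 63872 * 4096 / (1024 * 1024) }'   # 249.5 MiB/s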
00:11:25.022 [2024-06-11 06:03:55.354638] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108065 ] 00:11:25.022 [2024-06-11 06:03:55.537550] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.280 [2024-06-11 06:03:55.814152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.538 06:03:56 -- accel/accel.sh@21 -- # val= 00:11:25.538 06:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # IFS=: 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # read -r var val 00:11:25.538 06:03:56 -- accel/accel.sh@21 -- # val= 00:11:25.538 06:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # IFS=: 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # read -r var val 00:11:25.538 06:03:56 -- accel/accel.sh@21 -- # val= 00:11:25.538 06:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # IFS=: 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # read -r var val 00:11:25.538 06:03:56 -- accel/accel.sh@21 -- # val=0x1 00:11:25.538 06:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # IFS=: 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # read -r var val 00:11:25.538 06:03:56 -- accel/accel.sh@21 -- # val= 00:11:25.538 06:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # IFS=: 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # read -r var val 00:11:25.538 06:03:56 -- accel/accel.sh@21 -- # val= 00:11:25.538 06:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # IFS=: 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # read -r var val 00:11:25.538 06:03:56 -- accel/accel.sh@21 -- # val=decompress 00:11:25.538 06:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.538 06:03:56 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # IFS=: 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # read -r var val 00:11:25.538 06:03:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:25.538 06:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # IFS=: 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # read -r var val 00:11:25.538 06:03:56 -- accel/accel.sh@21 -- # val= 00:11:25.538 06:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # IFS=: 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # read -r var val 00:11:25.538 06:03:56 -- accel/accel.sh@21 -- # val=software 00:11:25.538 06:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.538 06:03:56 -- accel/accel.sh@23 -- # accel_module=software 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # IFS=: 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # read -r var val 00:11:25.538 06:03:56 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:25.538 06:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # IFS=: 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # read -r var val 00:11:25.538 06:03:56 -- accel/accel.sh@21 -- # val=32 00:11:25.538 06:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # IFS=: 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # read -r var val 00:11:25.538 06:03:56 -- 
accel/accel.sh@21 -- # val=32 00:11:25.538 06:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # IFS=: 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # read -r var val 00:11:25.538 06:03:56 -- accel/accel.sh@21 -- # val=1 00:11:25.538 06:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # IFS=: 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # read -r var val 00:11:25.538 06:03:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:25.538 06:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # IFS=: 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # read -r var val 00:11:25.538 06:03:56 -- accel/accel.sh@21 -- # val=Yes 00:11:25.538 06:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # IFS=: 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # read -r var val 00:11:25.538 06:03:56 -- accel/accel.sh@21 -- # val= 00:11:25.538 06:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # IFS=: 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # read -r var val 00:11:25.538 06:03:56 -- accel/accel.sh@21 -- # val= 00:11:25.538 06:03:56 -- accel/accel.sh@22 -- # case "$var" in 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # IFS=: 00:11:25.538 06:03:56 -- accel/accel.sh@20 -- # read -r var val 00:11:28.066 06:03:58 -- accel/accel.sh@21 -- # val= 00:11:28.067 06:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.067 06:03:58 -- accel/accel.sh@20 -- # IFS=: 00:11:28.067 06:03:58 -- accel/accel.sh@20 -- # read -r var val 00:11:28.067 06:03:58 -- accel/accel.sh@21 -- # val= 00:11:28.067 06:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.067 06:03:58 -- accel/accel.sh@20 -- # IFS=: 00:11:28.067 06:03:58 -- accel/accel.sh@20 -- # read -r var val 00:11:28.067 06:03:58 -- accel/accel.sh@21 -- # val= 00:11:28.067 06:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.067 06:03:58 -- accel/accel.sh@20 -- # IFS=: 00:11:28.067 06:03:58 -- accel/accel.sh@20 -- # read -r var val 00:11:28.067 06:03:58 -- accel/accel.sh@21 -- # val= 00:11:28.067 06:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.067 06:03:58 -- accel/accel.sh@20 -- # IFS=: 00:11:28.067 06:03:58 -- accel/accel.sh@20 -- # read -r var val 00:11:28.067 06:03:58 -- accel/accel.sh@21 -- # val= 00:11:28.067 06:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.067 06:03:58 -- accel/accel.sh@20 -- # IFS=: 00:11:28.067 06:03:58 -- accel/accel.sh@20 -- # read -r var val 00:11:28.067 06:03:58 -- accel/accel.sh@21 -- # val= 00:11:28.067 06:03:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.067 06:03:58 -- accel/accel.sh@20 -- # IFS=: 00:11:28.067 06:03:58 -- accel/accel.sh@20 -- # read -r var val 00:11:28.067 06:03:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:28.067 06:03:58 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:28.067 06:03:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:28.067 00:11:28.067 real 0m5.714s 00:11:28.067 user 0m4.961s 00:11:28.067 sys 0m0.577s 00:11:28.067 ************************************ 00:11:28.067 06:03:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:28.067 06:03:58 -- common/autotest_common.sh@10 -- # set +x 00:11:28.067 END TEST accel_decomp 00:11:28.067 ************************************ 00:11:28.067 06:03:58 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
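The final case (named accel_decmop_full by the harness itself — the transposed spelling is the script's own) differs from plain decompress only in `-o 0`. In accel_perf, `-o` sets the transfer size, and passing 0 here appears to let the run use the input's full chunk size rather than 4096-byte slices — an inference from the 111250-byte transfer size reported below, not documented behavior. At that size the summary's Total row checks out the same way:

    # 4576 transfers/s at 111250 bytes each
    awk 'BEGIN { printf "%.1f MiB/s\n", 4576 * 111250 / (1024 * 1024) }'   # 485.5 MiB/s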
00:11:28.067 06:03:58 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:28.067 06:03:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:28.067 06:03:58 -- common/autotest_common.sh@10 -- # set +x 00:11:28.067 ************************************ 00:11:28.067 START TEST accel_decmop_full 00:11:28.067 ************************************ 00:11:28.067 06:03:58 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:28.067 06:03:58 -- accel/accel.sh@16 -- # local accel_opc 00:11:28.067 06:03:58 -- accel/accel.sh@17 -- # local accel_module 00:11:28.067 06:03:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:28.067 06:03:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:28.067 06:03:58 -- accel/accel.sh@12 -- # build_accel_config 00:11:28.067 06:03:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:28.067 06:03:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:28.067 06:03:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:28.067 06:03:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:28.067 06:03:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:28.067 06:03:58 -- accel/accel.sh@41 -- # local IFS=, 00:11:28.067 06:03:58 -- accel/accel.sh@42 -- # jq -r . 00:11:28.067 [2024-06-11 06:03:58.293987] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:11:28.067 [2024-06-11 06:03:58.294206] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108122 ] 00:11:28.067 [2024-06-11 06:03:58.478219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.325 [2024-06-11 06:03:58.718786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.854 06:04:01 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:30.854 00:11:30.854 SPDK Configuration: 00:11:30.854 Core mask: 0x1 00:11:30.854 00:11:30.854 Accel Perf Configuration: 00:11:30.854 Workload Type: decompress 00:11:30.854 Transfer size: 111250 bytes 00:11:30.854 Vector count 1 00:11:30.854 Module: software 00:11:30.854 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:30.854 Queue depth: 32 00:11:30.854 Allocate depth: 32 00:11:30.854 # threads/core: 1 00:11:30.854 Run time: 1 seconds 00:11:30.854 Verify: Yes 00:11:30.854 00:11:30.854 Running for 1 seconds... 
00:11:30.854 00:11:30.854 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:30.854 ------------------------------------------------------------------------------------ 00:11:30.854 0,0 4576/s 189 MiB/s 0 0 00:11:30.854 ==================================================================================== 00:11:30.854 Total 4576/s 485 MiB/s 0 0' 00:11:30.854 06:04:01 -- accel/accel.sh@20 -- # IFS=: 00:11:30.854 06:04:01 -- accel/accel.sh@20 -- # read -r var val 00:11:30.854 06:04:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:30.854 06:04:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:30.854 06:04:01 -- accel/accel.sh@12 -- # build_accel_config 00:11:30.854 06:04:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:30.854 06:04:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:30.854 06:04:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:30.854 06:04:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:30.854 06:04:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:30.854 06:04:01 -- accel/accel.sh@41 -- # local IFS=, 00:11:30.854 06:04:01 -- accel/accel.sh@42 -- # jq -r . 00:11:30.854 [2024-06-11 06:04:01.143567] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:11:30.854 [2024-06-11 06:04:01.143784] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108166 ] 00:11:30.854 [2024-06-11 06:04:01.324208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.112 [2024-06-11 06:04:01.600214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.369 06:04:01 -- accel/accel.sh@21 -- # val= 00:11:31.369 06:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.369 06:04:01 -- accel/accel.sh@20 -- # IFS=: 00:11:31.369 06:04:01 -- accel/accel.sh@20 -- # read -r var val 00:11:31.369 06:04:01 -- accel/accel.sh@21 -- # val= 00:11:31.369 06:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.369 06:04:01 -- accel/accel.sh@20 -- # IFS=: 00:11:31.369 06:04:01 -- accel/accel.sh@20 -- # read -r var val 00:11:31.369 06:04:01 -- accel/accel.sh@21 -- # val= 00:11:31.369 06:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.369 06:04:01 -- accel/accel.sh@20 -- # IFS=: 00:11:31.369 06:04:01 -- accel/accel.sh@20 -- # read -r var val 00:11:31.369 06:04:01 -- accel/accel.sh@21 -- # val=0x1 00:11:31.369 06:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.369 06:04:01 -- accel/accel.sh@20 -- # IFS=: 00:11:31.369 06:04:01 -- accel/accel.sh@20 -- # read -r var val 00:11:31.369 06:04:01 -- accel/accel.sh@21 -- # val= 00:11:31.369 06:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.369 06:04:01 -- accel/accel.sh@20 -- # IFS=: 00:11:31.369 06:04:01 -- accel/accel.sh@20 -- # read -r var val 00:11:31.369 06:04:01 -- accel/accel.sh@21 -- # val= 00:11:31.369 06:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.369 06:04:01 -- accel/accel.sh@20 -- # IFS=: 00:11:31.369 06:04:01 -- accel/accel.sh@20 -- # read -r var val 00:11:31.369 06:04:01 -- accel/accel.sh@21 -- # val=decompress 00:11:31.369 06:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.369 06:04:01 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:31.369 06:04:01 -- 
accel/accel.sh@20 -- # IFS=: 00:11:31.370 06:04:01 -- accel/accel.sh@20 -- # read -r var val 00:11:31.370 06:04:01 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:31.370 06:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.370 06:04:01 -- accel/accel.sh@20 -- # IFS=: 00:11:31.370 06:04:01 -- accel/accel.sh@20 -- # read -r var val 00:11:31.370 06:04:01 -- accel/accel.sh@21 -- # val= 00:11:31.370 06:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.370 06:04:01 -- accel/accel.sh@20 -- # IFS=: 00:11:31.370 06:04:01 -- accel/accel.sh@20 -- # read -r var val 00:11:31.370 06:04:01 -- accel/accel.sh@21 -- # val=software 00:11:31.370 06:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.370 06:04:01 -- accel/accel.sh@23 -- # accel_module=software 00:11:31.370 06:04:01 -- accel/accel.sh@20 -- # IFS=: 00:11:31.370 06:04:01 -- accel/accel.sh@20 -- # read -r var val 00:11:31.370 06:04:01 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:31.370 06:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.370 06:04:01 -- accel/accel.sh@20 -- # IFS=: 00:11:31.370 06:04:01 -- accel/accel.sh@20 -- # read -r var val 00:11:31.370 06:04:01 -- accel/accel.sh@21 -- # val=32 00:11:31.370 06:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.370 06:04:01 -- accel/accel.sh@20 -- # IFS=: 00:11:31.370 06:04:01 -- accel/accel.sh@20 -- # read -r var val 00:11:31.370 06:04:01 -- accel/accel.sh@21 -- # val=32 00:11:31.370 06:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.370 06:04:01 -- accel/accel.sh@20 -- # IFS=: 00:11:31.370 06:04:01 -- accel/accel.sh@20 -- # read -r var val 00:11:31.370 06:04:01 -- accel/accel.sh@21 -- # val=1 00:11:31.370 06:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.370 06:04:01 -- accel/accel.sh@20 -- # IFS=: 00:11:31.370 06:04:01 -- accel/accel.sh@20 -- # read -r var val 00:11:31.370 06:04:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:31.370 06:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.370 06:04:01 -- accel/accel.sh@20 -- # IFS=: 00:11:31.370 06:04:01 -- accel/accel.sh@20 -- # read -r var val 00:11:31.370 06:04:01 -- accel/accel.sh@21 -- # val=Yes 00:11:31.370 06:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.370 06:04:01 -- accel/accel.sh@20 -- # IFS=: 00:11:31.370 06:04:01 -- accel/accel.sh@20 -- # read -r var val 00:11:31.370 06:04:01 -- accel/accel.sh@21 -- # val= 00:11:31.370 06:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.370 06:04:01 -- accel/accel.sh@20 -- # IFS=: 00:11:31.370 06:04:01 -- accel/accel.sh@20 -- # read -r var val 00:11:31.370 06:04:01 -- accel/accel.sh@21 -- # val= 00:11:31.370 06:04:01 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.370 06:04:01 -- accel/accel.sh@20 -- # IFS=: 00:11:31.370 06:04:01 -- accel/accel.sh@20 -- # read -r var val 00:11:33.922 06:04:03 -- accel/accel.sh@21 -- # val= 00:11:33.922 06:04:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.922 06:04:03 -- accel/accel.sh@20 -- # IFS=: 00:11:33.922 06:04:03 -- accel/accel.sh@20 -- # read -r var val 00:11:33.922 06:04:03 -- accel/accel.sh@21 -- # val= 00:11:33.922 06:04:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.922 06:04:03 -- accel/accel.sh@20 -- # IFS=: 00:11:33.922 06:04:03 -- accel/accel.sh@20 -- # read -r var val 00:11:33.922 06:04:03 -- accel/accel.sh@21 -- # val= 00:11:33.922 06:04:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.922 06:04:03 -- accel/accel.sh@20 -- # IFS=: 00:11:33.922 06:04:03 -- accel/accel.sh@20 -- # read -r var val 00:11:33.922 06:04:03 -- 
accel/accel.sh@21 -- # val= 00:11:33.922 06:04:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.922 06:04:03 -- accel/accel.sh@20 -- # IFS=: 00:11:33.922 06:04:03 -- accel/accel.sh@20 -- # read -r var val 00:11:33.922 06:04:03 -- accel/accel.sh@21 -- # val= 00:11:33.922 06:04:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.922 06:04:03 -- accel/accel.sh@20 -- # IFS=: 00:11:33.922 06:04:03 -- accel/accel.sh@20 -- # read -r var val 00:11:33.922 06:04:03 -- accel/accel.sh@21 -- # val= 00:11:33.922 06:04:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.922 06:04:03 -- accel/accel.sh@20 -- # IFS=: 00:11:33.922 06:04:03 -- accel/accel.sh@20 -- # read -r var val 00:11:33.922 06:04:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:33.922 06:04:03 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:33.922 06:04:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:33.922 00:11:33.922 real 0m5.744s 00:11:33.922 user 0m5.057s 00:11:33.922 sys 0m0.526s 00:11:33.922 06:04:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.922 06:04:03 -- common/autotest_common.sh@10 -- # set +x 00:11:33.922 ************************************ 00:11:33.922 END TEST accel_decmop_full 00:11:33.922 ************************************ 00:11:33.922 06:04:04 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:33.922 06:04:04 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:33.922 06:04:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:33.922 06:04:04 -- common/autotest_common.sh@10 -- # set +x 00:11:33.922 ************************************ 00:11:33.922 START TEST accel_decomp_mcore 00:11:33.922 ************************************ 00:11:33.922 06:04:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:33.922 06:04:04 -- accel/accel.sh@16 -- # local accel_opc 00:11:33.922 06:04:04 -- accel/accel.sh@17 -- # local accel_module 00:11:33.922 06:04:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:33.922 06:04:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:33.922 06:04:04 -- accel/accel.sh@12 -- # build_accel_config 00:11:33.922 06:04:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:33.922 06:04:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:33.922 06:04:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:33.922 06:04:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:33.922 06:04:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:33.922 06:04:04 -- accel/accel.sh@41 -- # local IFS=, 00:11:33.922 06:04:04 -- accel/accel.sh@42 -- # jq -r . 00:11:33.922 [2024-06-11 06:04:04.097375] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:11:33.922 [2024-06-11 06:04:04.097569] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108223 ] 00:11:33.922 [2024-06-11 06:04:04.297371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:33.922 [2024-06-11 06:04:04.534452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.922 [2024-06-11 06:04:04.534633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.922 [2024-06-11 06:04:04.534815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.922 [2024-06-11 06:04:04.534915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.453 06:04:06 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:36.453 00:11:36.453 SPDK Configuration: 00:11:36.453 Core mask: 0xf 00:11:36.453 00:11:36.453 Accel Perf Configuration: 00:11:36.453 Workload Type: decompress 00:11:36.453 Transfer size: 4096 bytes 00:11:36.453 Vector count 1 00:11:36.453 Module: software 00:11:36.453 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:36.453 Queue depth: 32 00:11:36.453 Allocate depth: 32 00:11:36.453 # threads/core: 1 00:11:36.453 Run time: 1 seconds 00:11:36.453 Verify: Yes 00:11:36.453 00:11:36.453 Running for 1 seconds... 00:11:36.453 00:11:36.453 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:36.453 ------------------------------------------------------------------------------------ 00:11:36.453 0,0 55392/s 102 MiB/s 0 0 00:11:36.453 3,0 59104/s 108 MiB/s 0 0 00:11:36.453 2,0 52160/s 96 MiB/s 0 0 00:11:36.453 1,0 58560/s 107 MiB/s 0 0 00:11:36.453 ==================================================================================== 00:11:36.453 Total 225216/s 879 MiB/s 0 0' 00:11:36.453 06:04:06 -- accel/accel.sh@20 -- # IFS=: 00:11:36.453 06:04:06 -- accel/accel.sh@20 -- # read -r var val 00:11:36.453 06:04:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:36.453 06:04:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:36.453 06:04:06 -- accel/accel.sh@12 -- # build_accel_config 00:11:36.453 06:04:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:36.453 06:04:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:36.453 06:04:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:36.453 06:04:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:36.453 06:04:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:36.453 06:04:06 -- accel/accel.sh@41 -- # local IFS=, 00:11:36.453 06:04:06 -- accel/accel.sh@42 -- # jq -r . 00:11:36.453 [2024-06-11 06:04:06.981829] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
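(The four Core,Thread rows in the table above fall out of the -m 0xf core mask: one reactor, and hence one worker thread, per set bit. Purely illustrative, the bit layout can be checked with bc:)
echo 'obase=2; ibase=16; F' | bc   # -> 1111: one reactor each on cores 0, 1, 2 and 3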
00:11:36.453 [2024-06-11 06:04:06.982048] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108261 ] 00:11:36.712 [2024-06-11 06:04:07.183546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:36.970 [2024-06-11 06:04:07.440197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.970 [2024-06-11 06:04:07.440372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:36.970 [2024-06-11 06:04:07.441323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.970 [2024-06-11 06:04:07.441325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.227 06:04:07 -- accel/accel.sh@21 -- # val= 00:11:37.227 06:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.227 06:04:07 -- accel/accel.sh@20 -- # IFS=: 00:11:37.227 06:04:07 -- accel/accel.sh@20 -- # read -r var val 00:11:37.227 06:04:07 -- accel/accel.sh@21 -- # val= 00:11:37.227 06:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.227 06:04:07 -- accel/accel.sh@20 -- # IFS=: 00:11:37.227 06:04:07 -- accel/accel.sh@20 -- # read -r var val 00:11:37.227 06:04:07 -- accel/accel.sh@21 -- # val= 00:11:37.227 06:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.227 06:04:07 -- accel/accel.sh@20 -- # IFS=: 00:11:37.227 06:04:07 -- accel/accel.sh@20 -- # read -r var val 00:11:37.227 06:04:07 -- accel/accel.sh@21 -- # val=0xf 00:11:37.227 06:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.227 06:04:07 -- accel/accel.sh@20 -- # IFS=: 00:11:37.227 06:04:07 -- accel/accel.sh@20 -- # read -r var val 00:11:37.227 06:04:07 -- accel/accel.sh@21 -- # val= 00:11:37.227 06:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.227 06:04:07 -- accel/accel.sh@20 -- # IFS=: 00:11:37.227 06:04:07 -- accel/accel.sh@20 -- # read -r var val 00:11:37.227 06:04:07 -- accel/accel.sh@21 -- # val= 00:11:37.227 06:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.227 06:04:07 -- accel/accel.sh@20 -- # IFS=: 00:11:37.227 06:04:07 -- accel/accel.sh@20 -- # read -r var val 00:11:37.227 06:04:07 -- accel/accel.sh@21 -- # val=decompress 00:11:37.227 06:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.227 06:04:07 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:37.227 06:04:07 -- accel/accel.sh@20 -- # IFS=: 00:11:37.227 06:04:07 -- accel/accel.sh@20 -- # read -r var val 00:11:37.227 06:04:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:37.227 06:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.227 06:04:07 -- accel/accel.sh@20 -- # IFS=: 00:11:37.227 06:04:07 -- accel/accel.sh@20 -- # read -r var val 00:11:37.227 06:04:07 -- accel/accel.sh@21 -- # val= 00:11:37.227 06:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.227 06:04:07 -- accel/accel.sh@20 -- # IFS=: 00:11:37.227 06:04:07 -- accel/accel.sh@20 -- # read -r var val 00:11:37.227 06:04:07 -- accel/accel.sh@21 -- # val=software 00:11:37.227 06:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.227 06:04:07 -- accel/accel.sh@23 -- # accel_module=software 00:11:37.227 06:04:07 -- accel/accel.sh@20 -- # IFS=: 00:11:37.227 06:04:07 -- accel/accel.sh@20 -- # read -r var val 00:11:37.228 06:04:07 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:37.228 06:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.228 06:04:07 -- accel/accel.sh@20 -- # IFS=: 
00:11:37.228 06:04:07 -- accel/accel.sh@20 -- # read -r var val 00:11:37.228 06:04:07 -- accel/accel.sh@21 -- # val=32 00:11:37.228 06:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.228 06:04:07 -- accel/accel.sh@20 -- # IFS=: 00:11:37.228 06:04:07 -- accel/accel.sh@20 -- # read -r var val 00:11:37.228 06:04:07 -- accel/accel.sh@21 -- # val=32 00:11:37.228 06:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.228 06:04:07 -- accel/accel.sh@20 -- # IFS=: 00:11:37.228 06:04:07 -- accel/accel.sh@20 -- # read -r var val 00:11:37.228 06:04:07 -- accel/accel.sh@21 -- # val=1 00:11:37.228 06:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.228 06:04:07 -- accel/accel.sh@20 -- # IFS=: 00:11:37.228 06:04:07 -- accel/accel.sh@20 -- # read -r var val 00:11:37.228 06:04:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:37.228 06:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.228 06:04:07 -- accel/accel.sh@20 -- # IFS=: 00:11:37.228 06:04:07 -- accel/accel.sh@20 -- # read -r var val 00:11:37.228 06:04:07 -- accel/accel.sh@21 -- # val=Yes 00:11:37.228 06:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.228 06:04:07 -- accel/accel.sh@20 -- # IFS=: 00:11:37.228 06:04:07 -- accel/accel.sh@20 -- # read -r var val 00:11:37.228 06:04:07 -- accel/accel.sh@21 -- # val= 00:11:37.228 06:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.228 06:04:07 -- accel/accel.sh@20 -- # IFS=: 00:11:37.228 06:04:07 -- accel/accel.sh@20 -- # read -r var val 00:11:37.228 06:04:07 -- accel/accel.sh@21 -- # val= 00:11:37.228 06:04:07 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.228 06:04:07 -- accel/accel.sh@20 -- # IFS=: 00:11:37.228 06:04:07 -- accel/accel.sh@20 -- # read -r var val 00:11:39.798 06:04:09 -- accel/accel.sh@21 -- # val= 00:11:39.798 06:04:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.798 06:04:09 -- accel/accel.sh@20 -- # IFS=: 00:11:39.798 06:04:09 -- accel/accel.sh@20 -- # read -r var val 00:11:39.798 06:04:09 -- accel/accel.sh@21 -- # val= 00:11:39.798 06:04:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.798 06:04:09 -- accel/accel.sh@20 -- # IFS=: 00:11:39.798 06:04:09 -- accel/accel.sh@20 -- # read -r var val 00:11:39.798 06:04:09 -- accel/accel.sh@21 -- # val= 00:11:39.798 06:04:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.798 06:04:09 -- accel/accel.sh@20 -- # IFS=: 00:11:39.798 06:04:09 -- accel/accel.sh@20 -- # read -r var val 00:11:39.798 06:04:09 -- accel/accel.sh@21 -- # val= 00:11:39.798 06:04:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.798 06:04:09 -- accel/accel.sh@20 -- # IFS=: 00:11:39.798 06:04:09 -- accel/accel.sh@20 -- # read -r var val 00:11:39.798 06:04:09 -- accel/accel.sh@21 -- # val= 00:11:39.798 06:04:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.798 06:04:09 -- accel/accel.sh@20 -- # IFS=: 00:11:39.798 06:04:09 -- accel/accel.sh@20 -- # read -r var val 00:11:39.798 06:04:09 -- accel/accel.sh@21 -- # val= 00:11:39.798 06:04:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.798 06:04:09 -- accel/accel.sh@20 -- # IFS=: 00:11:39.798 06:04:09 -- accel/accel.sh@20 -- # read -r var val 00:11:39.798 06:04:09 -- accel/accel.sh@21 -- # val= 00:11:39.798 06:04:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.798 06:04:09 -- accel/accel.sh@20 -- # IFS=: 00:11:39.798 06:04:09 -- accel/accel.sh@20 -- # read -r var val 00:11:39.798 06:04:09 -- accel/accel.sh@21 -- # val= 00:11:39.798 06:04:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.798 06:04:09 -- accel/accel.sh@20 -- # IFS=: 00:11:39.798 06:04:09 -- 
accel/accel.sh@20 -- # read -r var val 00:11:39.798 06:04:09 -- accel/accel.sh@21 -- # val= 00:11:39.798 06:04:09 -- accel/accel.sh@22 -- # case "$var" in 00:11:39.798 06:04:09 -- accel/accel.sh@20 -- # IFS=: 00:11:39.798 06:04:09 -- accel/accel.sh@20 -- # read -r var val 00:11:39.798 06:04:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:39.798 06:04:09 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:39.798 ************************************ 00:11:39.798 END TEST accel_decomp_mcore 00:11:39.798 ************************************ 00:11:39.798 06:04:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:39.798 00:11:39.798 real 0m5.828s 00:11:39.798 user 0m16.404s 00:11:39.798 sys 0m0.626s 00:11:39.798 06:04:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.798 06:04:09 -- common/autotest_common.sh@10 -- # set +x 00:11:39.798 06:04:09 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:39.798 06:04:09 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:39.798 06:04:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:39.798 06:04:09 -- common/autotest_common.sh@10 -- # set +x 00:11:39.798 ************************************ 00:11:39.798 START TEST accel_decomp_full_mcore 00:11:39.798 ************************************ 00:11:39.798 06:04:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:39.798 06:04:09 -- accel/accel.sh@16 -- # local accel_opc 00:11:39.798 06:04:09 -- accel/accel.sh@17 -- # local accel_module 00:11:39.798 06:04:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:39.798 06:04:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:39.798 06:04:09 -- accel/accel.sh@12 -- # build_accel_config 00:11:39.798 06:04:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:39.798 06:04:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:39.798 06:04:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:39.798 06:04:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:39.798 06:04:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:39.798 06:04:09 -- accel/accel.sh@41 -- # local IFS=, 00:11:39.798 06:04:09 -- accel/accel.sh@42 -- # jq -r . 00:11:39.798 [2024-06-11 06:04:09.998820] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:11:39.798 [2024-06-11 06:04:09.999138] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108321 ] 00:11:39.798 [2024-06-11 06:04:10.214614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:40.056 [2024-06-11 06:04:10.478388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.056 [2024-06-11 06:04:10.478614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.056 [2024-06-11 06:04:10.479615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.056 [2024-06-11 06:04:10.479615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:42.586 06:04:12 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:42.586 00:11:42.586 SPDK Configuration: 00:11:42.586 Core mask: 0xf 00:11:42.586 00:11:42.586 Accel Perf Configuration: 00:11:42.586 Workload Type: decompress 00:11:42.586 Transfer size: 111250 bytes 00:11:42.586 Vector count 1 00:11:42.586 Module: software 00:11:42.586 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:42.586 Queue depth: 32 00:11:42.586 Allocate depth: 32 00:11:42.586 # threads/core: 1 00:11:42.586 Run time: 1 seconds 00:11:42.586 Verify: Yes 00:11:42.586 00:11:42.586 Running for 1 seconds... 00:11:42.586 00:11:42.586 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:42.586 ------------------------------------------------------------------------------------ 00:11:42.586 0,0 3872/s 159 MiB/s 0 0 00:11:42.586 3,0 4448/s 183 MiB/s 0 0 00:11:42.586 2,0 4448/s 183 MiB/s 0 0 00:11:42.586 1,0 4448/s 183 MiB/s 0 0 00:11:42.586 ==================================================================================== 00:11:42.586 Total 17216/s 1826 MiB/s 0 0' 00:11:42.587 06:04:12 -- accel/accel.sh@20 -- # IFS=: 00:11:42.587 06:04:12 -- accel/accel.sh@20 -- # read -r var val 00:11:42.587 06:04:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:42.587 06:04:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:42.587 06:04:12 -- accel/accel.sh@12 -- # build_accel_config 00:11:42.587 06:04:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:42.587 06:04:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:42.587 06:04:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:42.587 06:04:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:42.587 06:04:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:42.587 06:04:12 -- accel/accel.sh@41 -- # local IFS=, 00:11:42.587 06:04:12 -- accel/accel.sh@42 -- # jq -r . 00:11:42.587 [2024-06-11 06:04:13.006078] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
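(Consistency check on the Total row above: aggregate bandwidth is transfers multiplied by the 111250-byte transfer size. The smaller per-core MiB/s figures presumably track the compressed source side of each transfer, so only the Total row is expected to satisfy this arithmetic:)
echo $(( 17216 * 111250 / 1048576 ))   # -> 1826, matching the Total row in MiB/s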
00:11:42.587 [2024-06-11 06:04:13.006298] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108373 ] 00:11:42.587 [2024-06-11 06:04:13.188718] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:42.845 [2024-06-11 06:04:13.460998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.845 [2024-06-11 06:04:13.461188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.845 [2024-06-11 06:04:13.462103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.845 [2024-06-11 06:04:13.462104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.104 06:04:13 -- accel/accel.sh@21 -- # val= 00:11:43.104 06:04:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # IFS=: 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # read -r var val 00:11:43.104 06:04:13 -- accel/accel.sh@21 -- # val= 00:11:43.104 06:04:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # IFS=: 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # read -r var val 00:11:43.104 06:04:13 -- accel/accel.sh@21 -- # val= 00:11:43.104 06:04:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # IFS=: 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # read -r var val 00:11:43.104 06:04:13 -- accel/accel.sh@21 -- # val=0xf 00:11:43.104 06:04:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # IFS=: 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # read -r var val 00:11:43.104 06:04:13 -- accel/accel.sh@21 -- # val= 00:11:43.104 06:04:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # IFS=: 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # read -r var val 00:11:43.104 06:04:13 -- accel/accel.sh@21 -- # val= 00:11:43.104 06:04:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # IFS=: 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # read -r var val 00:11:43.104 06:04:13 -- accel/accel.sh@21 -- # val=decompress 00:11:43.104 06:04:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.104 06:04:13 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # IFS=: 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # read -r var val 00:11:43.104 06:04:13 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:43.104 06:04:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # IFS=: 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # read -r var val 00:11:43.104 06:04:13 -- accel/accel.sh@21 -- # val= 00:11:43.104 06:04:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # IFS=: 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # read -r var val 00:11:43.104 06:04:13 -- accel/accel.sh@21 -- # val=software 00:11:43.104 06:04:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.104 06:04:13 -- accel/accel.sh@23 -- # accel_module=software 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # IFS=: 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # read -r var val 00:11:43.104 06:04:13 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:43.104 06:04:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # IFS=: 
00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # read -r var val 00:11:43.104 06:04:13 -- accel/accel.sh@21 -- # val=32 00:11:43.104 06:04:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # IFS=: 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # read -r var val 00:11:43.104 06:04:13 -- accel/accel.sh@21 -- # val=32 00:11:43.104 06:04:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # IFS=: 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # read -r var val 00:11:43.104 06:04:13 -- accel/accel.sh@21 -- # val=1 00:11:43.104 06:04:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # IFS=: 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # read -r var val 00:11:43.104 06:04:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:43.104 06:04:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # IFS=: 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # read -r var val 00:11:43.104 06:04:13 -- accel/accel.sh@21 -- # val=Yes 00:11:43.104 06:04:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # IFS=: 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # read -r var val 00:11:43.104 06:04:13 -- accel/accel.sh@21 -- # val= 00:11:43.104 06:04:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # IFS=: 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # read -r var val 00:11:43.104 06:04:13 -- accel/accel.sh@21 -- # val= 00:11:43.104 06:04:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # IFS=: 00:11:43.104 06:04:13 -- accel/accel.sh@20 -- # read -r var val 00:11:45.661 06:04:15 -- accel/accel.sh@21 -- # val= 00:11:45.661 06:04:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.661 06:04:15 -- accel/accel.sh@20 -- # IFS=: 00:11:45.661 06:04:15 -- accel/accel.sh@20 -- # read -r var val 00:11:45.661 06:04:15 -- accel/accel.sh@21 -- # val= 00:11:45.661 06:04:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.661 06:04:15 -- accel/accel.sh@20 -- # IFS=: 00:11:45.661 06:04:15 -- accel/accel.sh@20 -- # read -r var val 00:11:45.661 06:04:15 -- accel/accel.sh@21 -- # val= 00:11:45.661 06:04:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.661 06:04:15 -- accel/accel.sh@20 -- # IFS=: 00:11:45.661 06:04:15 -- accel/accel.sh@20 -- # read -r var val 00:11:45.661 06:04:15 -- accel/accel.sh@21 -- # val= 00:11:45.661 06:04:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.661 06:04:15 -- accel/accel.sh@20 -- # IFS=: 00:11:45.661 06:04:15 -- accel/accel.sh@20 -- # read -r var val 00:11:45.661 06:04:15 -- accel/accel.sh@21 -- # val= 00:11:45.661 06:04:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.661 06:04:15 -- accel/accel.sh@20 -- # IFS=: 00:11:45.661 06:04:15 -- accel/accel.sh@20 -- # read -r var val 00:11:45.661 06:04:15 -- accel/accel.sh@21 -- # val= 00:11:45.661 06:04:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.661 06:04:15 -- accel/accel.sh@20 -- # IFS=: 00:11:45.661 06:04:15 -- accel/accel.sh@20 -- # read -r var val 00:11:45.661 06:04:15 -- accel/accel.sh@21 -- # val= 00:11:45.661 06:04:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.661 06:04:15 -- accel/accel.sh@20 -- # IFS=: 00:11:45.661 06:04:15 -- accel/accel.sh@20 -- # read -r var val 00:11:45.661 06:04:15 -- accel/accel.sh@21 -- # val= 00:11:45.661 06:04:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.661 06:04:15 -- accel/accel.sh@20 -- # IFS=: 00:11:45.661 06:04:15 -- 
accel/accel.sh@20 -- # read -r var val 00:11:45.661 06:04:15 -- accel/accel.sh@21 -- # val= 00:11:45.661 06:04:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:45.661 06:04:15 -- accel/accel.sh@20 -- # IFS=: 00:11:45.661 06:04:15 -- accel/accel.sh@20 -- # read -r var val 00:11:45.661 06:04:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:45.661 06:04:15 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:45.661 06:04:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:45.661 00:11:45.661 real 0m5.988s 00:11:45.661 user 0m16.985s 00:11:45.661 sys 0m0.589s 00:11:45.661 06:04:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:45.661 ************************************ 00:11:45.661 END TEST accel_decomp_full_mcore 00:11:45.661 ************************************ 00:11:45.661 06:04:15 -- common/autotest_common.sh@10 -- # set +x 00:11:45.661 06:04:15 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:45.661 06:04:15 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:45.661 06:04:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:45.661 06:04:15 -- common/autotest_common.sh@10 -- # set +x 00:11:45.661 ************************************ 00:11:45.661 START TEST accel_decomp_mthread 00:11:45.661 ************************************ 00:11:45.661 06:04:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:45.661 06:04:15 -- accel/accel.sh@16 -- # local accel_opc 00:11:45.661 06:04:15 -- accel/accel.sh@17 -- # local accel_module 00:11:45.661 06:04:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:45.662 06:04:15 -- accel/accel.sh@12 -- # build_accel_config 00:11:45.662 06:04:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:45.662 06:04:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:45.662 06:04:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:45.662 06:04:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:45.662 06:04:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:45.662 06:04:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:45.662 06:04:15 -- accel/accel.sh@41 -- # local IFS=, 00:11:45.662 06:04:15 -- accel/accel.sh@42 -- # jq -r . 00:11:45.662 [2024-06-11 06:04:16.026919] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:11:45.662 [2024-06-11 06:04:16.027238] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108434 ] 00:11:45.662 [2024-06-11 06:04:16.189552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.921 [2024-06-11 06:04:16.442580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.463 06:04:18 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:48.463 00:11:48.463 SPDK Configuration: 00:11:48.463 Core mask: 0x1 00:11:48.463 00:11:48.463 Accel Perf Configuration: 00:11:48.463 Workload Type: decompress 00:11:48.463 Transfer size: 4096 bytes 00:11:48.463 Vector count 1 00:11:48.463 Module: software 00:11:48.463 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:48.463 Queue depth: 32 00:11:48.463 Allocate depth: 32 00:11:48.463 # threads/core: 2 00:11:48.463 Run time: 1 seconds 00:11:48.463 Verify: Yes 00:11:48.463 00:11:48.463 Running for 1 seconds... 00:11:48.463 00:11:48.463 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:48.463 ------------------------------------------------------------------------------------ 00:11:48.463 0,1 32320/s 59 MiB/s 0 0 00:11:48.463 0,0 32192/s 59 MiB/s 0 0 00:11:48.463 ==================================================================================== 00:11:48.463 Total 64512/s 252 MiB/s 0 0' 00:11:48.463 06:04:18 -- accel/accel.sh@20 -- # IFS=: 00:11:48.463 06:04:18 -- accel/accel.sh@20 -- # read -r var val 00:11:48.463 06:04:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:48.463 06:04:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:48.463 06:04:18 -- accel/accel.sh@12 -- # build_accel_config 00:11:48.463 06:04:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:48.463 06:04:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:48.463 06:04:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:48.463 06:04:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:48.463 06:04:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:48.463 06:04:18 -- accel/accel.sh@41 -- # local IFS=, 00:11:48.463 06:04:18 -- accel/accel.sh@42 -- # jq -r . 00:11:48.463 [2024-06-11 06:04:18.848716] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
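(The 0,0 and 0,1 rows in the table above are the two worker threads that -T 2 places on core 0; the Total row is simply their sum:)
echo $(( 32320 + 32192 ))   # -> 64512 transfers/s aggregated over both threads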
00:11:48.463 [2024-06-11 06:04:18.848975] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108475 ] 00:11:48.463 [2024-06-11 06:04:19.011997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.723 [2024-06-11 06:04:19.270306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.982 06:04:19 -- accel/accel.sh@21 -- # val= 00:11:48.982 06:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # IFS=: 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # read -r var val 00:11:48.982 06:04:19 -- accel/accel.sh@21 -- # val= 00:11:48.982 06:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # IFS=: 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # read -r var val 00:11:48.982 06:04:19 -- accel/accel.sh@21 -- # val= 00:11:48.982 06:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # IFS=: 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # read -r var val 00:11:48.982 06:04:19 -- accel/accel.sh@21 -- # val=0x1 00:11:48.982 06:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # IFS=: 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # read -r var val 00:11:48.982 06:04:19 -- accel/accel.sh@21 -- # val= 00:11:48.982 06:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # IFS=: 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # read -r var val 00:11:48.982 06:04:19 -- accel/accel.sh@21 -- # val= 00:11:48.982 06:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # IFS=: 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # read -r var val 00:11:48.982 06:04:19 -- accel/accel.sh@21 -- # val=decompress 00:11:48.982 06:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.982 06:04:19 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # IFS=: 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # read -r var val 00:11:48.982 06:04:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:48.982 06:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # IFS=: 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # read -r var val 00:11:48.982 06:04:19 -- accel/accel.sh@21 -- # val= 00:11:48.982 06:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # IFS=: 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # read -r var val 00:11:48.982 06:04:19 -- accel/accel.sh@21 -- # val=software 00:11:48.982 06:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.982 06:04:19 -- accel/accel.sh@23 -- # accel_module=software 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # IFS=: 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # read -r var val 00:11:48.982 06:04:19 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:48.982 06:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # IFS=: 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # read -r var val 00:11:48.982 06:04:19 -- accel/accel.sh@21 -- # val=32 00:11:48.982 06:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # IFS=: 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # read -r var val 00:11:48.982 06:04:19 -- 
accel/accel.sh@21 -- # val=32 00:11:48.982 06:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # IFS=: 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # read -r var val 00:11:48.982 06:04:19 -- accel/accel.sh@21 -- # val=2 00:11:48.982 06:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # IFS=: 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # read -r var val 00:11:48.982 06:04:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:48.982 06:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # IFS=: 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # read -r var val 00:11:48.982 06:04:19 -- accel/accel.sh@21 -- # val=Yes 00:11:48.982 06:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # IFS=: 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # read -r var val 00:11:48.982 06:04:19 -- accel/accel.sh@21 -- # val= 00:11:48.982 06:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # IFS=: 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # read -r var val 00:11:48.982 06:04:19 -- accel/accel.sh@21 -- # val= 00:11:48.982 06:04:19 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.982 06:04:19 -- accel/accel.sh@20 -- # IFS=: 00:11:48.983 06:04:19 -- accel/accel.sh@20 -- # read -r var val 00:11:51.532 06:04:21 -- accel/accel.sh@21 -- # val= 00:11:51.532 06:04:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.532 06:04:21 -- accel/accel.sh@20 -- # IFS=: 00:11:51.533 06:04:21 -- accel/accel.sh@20 -- # read -r var val 00:11:51.533 06:04:21 -- accel/accel.sh@21 -- # val= 00:11:51.533 06:04:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.533 06:04:21 -- accel/accel.sh@20 -- # IFS=: 00:11:51.533 06:04:21 -- accel/accel.sh@20 -- # read -r var val 00:11:51.533 06:04:21 -- accel/accel.sh@21 -- # val= 00:11:51.533 06:04:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.533 06:04:21 -- accel/accel.sh@20 -- # IFS=: 00:11:51.533 06:04:21 -- accel/accel.sh@20 -- # read -r var val 00:11:51.533 06:04:21 -- accel/accel.sh@21 -- # val= 00:11:51.533 06:04:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.533 06:04:21 -- accel/accel.sh@20 -- # IFS=: 00:11:51.533 06:04:21 -- accel/accel.sh@20 -- # read -r var val 00:11:51.533 06:04:21 -- accel/accel.sh@21 -- # val= 00:11:51.533 06:04:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.533 06:04:21 -- accel/accel.sh@20 -- # IFS=: 00:11:51.533 06:04:21 -- accel/accel.sh@20 -- # read -r var val 00:11:51.533 06:04:21 -- accel/accel.sh@21 -- # val= 00:11:51.533 06:04:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.533 06:04:21 -- accel/accel.sh@20 -- # IFS=: 00:11:51.533 06:04:21 -- accel/accel.sh@20 -- # read -r var val 00:11:51.533 06:04:21 -- accel/accel.sh@21 -- # val= 00:11:51.533 06:04:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.533 06:04:21 -- accel/accel.sh@20 -- # IFS=: 00:11:51.533 06:04:21 -- accel/accel.sh@20 -- # read -r var val 00:11:51.533 ************************************ 00:11:51.533 END TEST accel_decomp_mthread 00:11:51.533 ************************************ 00:11:51.533 06:04:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:51.533 06:04:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:51.533 06:04:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:51.533 00:11:51.533 real 0m5.669s 00:11:51.533 user 0m4.970s 00:11:51.533 sys 0m0.535s 00:11:51.533 06:04:21 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:11:51.533 06:04:21 -- common/autotest_common.sh@10 -- # set +x 00:11:51.533 06:04:21 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:51.533 06:04:21 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:51.533 06:04:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:51.533 06:04:21 -- common/autotest_common.sh@10 -- # set +x 00:11:51.533 ************************************ 00:11:51.533 START TEST accel_deomp_full_mthread 00:11:51.533 ************************************ 00:11:51.533 06:04:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:51.533 06:04:21 -- accel/accel.sh@16 -- # local accel_opc 00:11:51.533 06:04:21 -- accel/accel.sh@17 -- # local accel_module 00:11:51.533 06:04:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:51.533 06:04:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:51.533 06:04:21 -- accel/accel.sh@12 -- # build_accel_config 00:11:51.533 06:04:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:51.533 06:04:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:51.533 06:04:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:51.533 06:04:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:51.533 06:04:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:51.533 06:04:21 -- accel/accel.sh@41 -- # local IFS=, 00:11:51.533 06:04:21 -- accel/accel.sh@42 -- # jq -r . 00:11:51.533 [2024-06-11 06:04:21.770230] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:11:51.533 [2024-06-11 06:04:21.770497] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108534 ] 00:11:51.533 [2024-06-11 06:04:21.931928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.533 [2024-06-11 06:04:22.176133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.067 06:04:24 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:54.067 00:11:54.067 SPDK Configuration: 00:11:54.067 Core mask: 0x1 00:11:54.067 00:11:54.067 Accel Perf Configuration: 00:11:54.067 Workload Type: decompress 00:11:54.067 Transfer size: 111250 bytes 00:11:54.067 Vector count 1 00:11:54.067 Module: software 00:11:54.067 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:54.067 Queue depth: 32 00:11:54.067 Allocate depth: 32 00:11:54.067 # threads/core: 2 00:11:54.067 Run time: 1 seconds 00:11:54.067 Verify: Yes 00:11:54.067 00:11:54.067 Running for 1 seconds... 
00:11:54.067 00:11:54.067 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:54.067 ------------------------------------------------------------------------------------ 00:11:54.067 0,1 2368/s 97 MiB/s 0 0 00:11:54.067 0,0 2336/s 96 MiB/s 0 0 00:11:54.067 ==================================================================================== 00:11:54.067 Total 4704/s 499 MiB/s 0 0' 00:11:54.067 06:04:24 -- accel/accel.sh@20 -- # IFS=: 00:11:54.067 06:04:24 -- accel/accel.sh@20 -- # read -r var val 00:11:54.067 06:04:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:54.067 06:04:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:54.067 06:04:24 -- accel/accel.sh@12 -- # build_accel_config 00:11:54.067 06:04:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:54.067 06:04:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:54.067 06:04:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:54.067 06:04:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:54.067 06:04:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:54.067 06:04:24 -- accel/accel.sh@41 -- # local IFS=, 00:11:54.067 06:04:24 -- accel/accel.sh@42 -- # jq -r . 00:11:54.067 [2024-06-11 06:04:24.600953] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:11:54.067 [2024-06-11 06:04:24.601252] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108577 ] 00:11:54.327 [2024-06-11 06:04:24.764707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.585 [2024-06-11 06:04:25.035842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.844 06:04:25 -- accel/accel.sh@21 -- # val= 00:11:54.844 06:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 06:04:25 -- accel/accel.sh@21 -- # val= 00:11:54.844 06:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 06:04:25 -- accel/accel.sh@21 -- # val= 00:11:54.844 06:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 06:04:25 -- accel/accel.sh@21 -- # val=0x1 00:11:54.844 06:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 06:04:25 -- accel/accel.sh@21 -- # val= 00:11:54.844 06:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 06:04:25 -- accel/accel.sh@21 -- # val= 00:11:54.844 06:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 06:04:25 -- accel/accel.sh@21 -- # val=decompress 00:11:54.844 06:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 06:04:25 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 06:04:25 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:54.844 06:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 06:04:25 -- accel/accel.sh@21 -- # val= 00:11:54.844 06:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 06:04:25 -- accel/accel.sh@21 -- # val=software 00:11:54.844 06:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 06:04:25 -- accel/accel.sh@23 -- # accel_module=software 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 06:04:25 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:54.844 06:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 06:04:25 -- accel/accel.sh@21 -- # val=32 00:11:54.844 06:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 06:04:25 -- accel/accel.sh@21 -- # val=32 00:11:54.844 06:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 06:04:25 -- accel/accel.sh@21 -- # val=2 00:11:54.844 06:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 06:04:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:54.844 06:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 06:04:25 -- accel/accel.sh@21 -- # val=Yes 00:11:54.844 06:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 06:04:25 -- accel/accel.sh@21 -- # val= 00:11:54.844 06:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # read -r var val 00:11:54.844 06:04:25 -- accel/accel.sh@21 -- # val= 00:11:54.844 06:04:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # IFS=: 00:11:54.844 06:04:25 -- accel/accel.sh@20 -- # read -r var val 00:11:57.380 06:04:27 -- accel/accel.sh@21 -- # val= 00:11:57.380 06:04:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.380 06:04:27 -- accel/accel.sh@20 -- # IFS=: 00:11:57.380 06:04:27 -- accel/accel.sh@20 -- # read -r var val 00:11:57.380 06:04:27 -- accel/accel.sh@21 -- # val= 00:11:57.380 06:04:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.380 06:04:27 -- accel/accel.sh@20 -- # IFS=: 00:11:57.380 06:04:27 -- accel/accel.sh@20 -- # read -r var val 00:11:57.380 06:04:27 -- accel/accel.sh@21 -- # val= 00:11:57.380 06:04:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.380 06:04:27 -- accel/accel.sh@20 -- # IFS=: 00:11:57.380 06:04:27 -- accel/accel.sh@20 -- # 
read -r var val 00:11:57.380 06:04:27 -- accel/accel.sh@21 -- # val= 00:11:57.380 06:04:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.380 06:04:27 -- accel/accel.sh@20 -- # IFS=: 00:11:57.380 06:04:27 -- accel/accel.sh@20 -- # read -r var val 00:11:57.380 06:04:27 -- accel/accel.sh@21 -- # val= 00:11:57.380 06:04:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.380 06:04:27 -- accel/accel.sh@20 -- # IFS=: 00:11:57.380 06:04:27 -- accel/accel.sh@20 -- # read -r var val 00:11:57.380 06:04:27 -- accel/accel.sh@21 -- # val= 00:11:57.380 06:04:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.380 06:04:27 -- accel/accel.sh@20 -- # IFS=: 00:11:57.380 06:04:27 -- accel/accel.sh@20 -- # read -r var val 00:11:57.380 06:04:27 -- accel/accel.sh@21 -- # val= 00:11:57.380 06:04:27 -- accel/accel.sh@22 -- # case "$var" in 00:11:57.380 06:04:27 -- accel/accel.sh@20 -- # IFS=: 00:11:57.380 06:04:27 -- accel/accel.sh@20 -- # read -r var val 00:11:57.380 ************************************ 00:11:57.380 END TEST accel_deomp_full_mthread 00:11:57.380 ************************************ 00:11:57.380 06:04:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:57.380 06:04:27 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:57.380 06:04:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:57.380 00:11:57.380 real 0m5.724s 00:11:57.380 user 0m5.072s 00:11:57.380 sys 0m0.490s 00:11:57.380 06:04:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:57.380 06:04:27 -- common/autotest_common.sh@10 -- # set +x 00:11:57.380 06:04:27 -- accel/accel.sh@116 -- # [[ n == y ]] 00:11:57.380 06:04:27 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:57.380 06:04:27 -- accel/accel.sh@129 -- # build_accel_config 00:11:57.380 06:04:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:57.380 06:04:27 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:57.380 06:04:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:57.380 06:04:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:57.380 06:04:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:57.380 06:04:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:57.380 06:04:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:57.380 06:04:27 -- accel/accel.sh@41 -- # local IFS=, 00:11:57.380 06:04:27 -- accel/accel.sh@42 -- # jq -r . 00:11:57.380 06:04:27 -- common/autotest_common.sh@10 -- # set +x 00:11:57.380 ************************************ 00:11:57.380 START TEST accel_dif_functional_tests 00:11:57.380 ************************************ 00:11:57.380 06:04:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:57.380 [2024-06-11 06:04:27.621283] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
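(The -c /dev/fd/62 argument handed to the dif binary above comes from bash process substitution inside the test scripts: the JSON accel config is fed through an inherited pipe rather than a file on disk. A hand-run sketch of the same idiom, assuming an empty subsystems config is acceptable to the loader:)
/home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c <(echo '{"subsystems": []}')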
00:11:57.380 [2024-06-11 06:04:27.621734] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108628 ] 00:11:57.380 [2024-06-11 06:04:27.813388] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:57.640 [2024-06-11 06:04:28.079279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.640 [2024-06-11 06:04:28.079393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.640 [2024-06-11 06:04:28.079394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.899 00:11:57.899 00:11:57.899 CUnit - A unit testing framework for C - Version 2.1-3 00:11:57.899 http://cunit.sourceforge.net/ 00:11:57.899 00:11:57.899 00:11:57.899 Suite: accel_dif 00:11:57.899 Test: verify: DIF generated, GUARD check ...passed 00:11:57.899 Test: verify: DIF generated, APPTAG check ...passed 00:11:57.899 Test: verify: DIF generated, REFTAG check ...passed 00:11:57.899 Test: verify: DIF not generated, GUARD check ...[2024-06-11 06:04:28.531464] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:57.899 [2024-06-11 06:04:28.531831] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:57.899 passed 00:11:57.900 Test: verify: DIF not generated, APPTAG check ...[2024-06-11 06:04:28.532075] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:57.900 [2024-06-11 06:04:28.532221] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:57.900 passed 00:11:57.900 Test: verify: DIF not generated, REFTAG check ...[2024-06-11 06:04:28.532467] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:57.900 [2024-06-11 06:04:28.532572] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:57.900 passed 00:11:57.900 Test: verify: APPTAG correct, APPTAG check ...passed 00:11:57.900 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-11 06:04:28.532892] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:11:57.900 passed 00:11:57.900 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:11:57.900 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:11:57.900 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:11:57.900 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:11:57.900 Test: generate copy: DIF generated, GUARD check ...[2024-06-11 06:04:28.533543] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:11:57.900 passed 00:11:57.900 Test: generate copy: DIF generated, APPTAG check ...passed 00:11:57.900 Test: generate copy: DIF generated, REFTAG check ...passed 00:11:57.900 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:11:57.900 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:11:57.900 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:11:57.900 Test: generate copy: iovecs-len validate ...[2024-06-11 06:04:28.534487] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
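For reference, the DIF functional test that produced the CUnit output above is launched by the accel.sh harness roughly as follows — a minimal sketch, assuming the run_test and build_accel_config helpers visible in this trace; build_accel_config prints the accel JSON config to stdout, and bash process substitution is what the binary sees as -c /dev/fd/62:

    # sketch only; the binary path is taken verbatim from the run_test trace above
    run_test accel_dif_functional_tests \
        /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c <(build_accel_config)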
00:11:57.900 passed 00:11:57.900 Test: generate copy: buffer alignment validate ...passed 00:11:57.900 00:11:57.900 Run Summary: Type Total Ran Passed Failed Inactive 00:11:57.900 suites 1 1 n/a 0 0 00:11:57.900 tests 20 20 20 0 0 00:11:57.900 asserts 204 204 204 0 n/a 00:11:57.900 00:11:57.900 Elapsed time = 0.010 seconds 00:11:59.844 ************************************ 00:11:59.844 END TEST accel_dif_functional_tests 00:11:59.844 ************************************ 00:11:59.844 00:11:59.844 real 0m2.698s 00:11:59.844 user 0m5.369s 00:11:59.844 sys 0m0.387s 00:11:59.845 06:04:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:59.845 06:04:30 -- common/autotest_common.sh@10 -- # set +x 00:11:59.845 00:11:59.845 real 2m9.805s 00:11:59.845 user 2m19.600s 00:11:59.845 sys 0m13.666s 00:11:59.845 06:04:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:59.845 ************************************ 00:11:59.845 END TEST accel 00:11:59.845 ************************************ 00:11:59.845 06:04:30 -- common/autotest_common.sh@10 -- # set +x 00:11:59.845 06:04:30 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:59.845 06:04:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:59.845 06:04:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:59.845 06:04:30 -- common/autotest_common.sh@10 -- # set +x 00:11:59.845 ************************************ 00:11:59.845 START TEST accel_rpc 00:11:59.845 ************************************ 00:11:59.845 06:04:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:59.845 * Looking for test storage... 00:11:59.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.845 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:11:59.845 06:04:30 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:59.845 06:04:30 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=108731 00:11:59.845 06:04:30 -- accel/accel_rpc.sh@15 -- # waitforlisten 108731 00:11:59.845 06:04:30 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:11:59.845 06:04:30 -- common/autotest_common.sh@819 -- # '[' -z 108731 ']' 00:11:59.845 06:04:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.845 06:04:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:59.845 06:04:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.845 06:04:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:59.845 06:04:30 -- common/autotest_common.sh@10 -- # set +x 00:12:00.103 [2024-06-11 06:04:30.533660] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:12:00.103 [2024-06-11 06:04:30.534205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108731 ] 00:12:00.103 [2024-06-11 06:04:30.715213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.361 [2024-06-11 06:04:30.959523] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:00.361 [2024-06-11 06:04:30.959975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.928 06:04:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:00.928 06:04:31 -- common/autotest_common.sh@852 -- # return 0 00:12:00.928 06:04:31 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:12:00.928 06:04:31 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:12:00.928 06:04:31 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:12:00.928 06:04:31 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:12:00.928 06:04:31 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:12:00.928 06:04:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:00.928 06:04:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:00.928 06:04:31 -- common/autotest_common.sh@10 -- # set +x 00:12:00.928 ************************************ 00:12:00.928 START TEST accel_assign_opcode 00:12:00.928 ************************************ 00:12:00.928 06:04:31 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:12:00.928 06:04:31 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:12:00.928 06:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:00.928 06:04:31 -- common/autotest_common.sh@10 -- # set +x 00:12:00.928 [2024-06-11 06:04:31.384929] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:12:00.928 06:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:00.928 06:04:31 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:12:00.928 06:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:00.928 06:04:31 -- common/autotest_common.sh@10 -- # set +x 00:12:00.928 [2024-06-11 06:04:31.396895] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:12:00.928 06:04:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:00.928 06:04:31 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:12:00.928 06:04:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:00.928 06:04:31 -- common/autotest_common.sh@10 -- # set +x 00:12:01.864 06:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:01.864 06:04:32 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:12:01.864 06:04:32 -- accel/accel_rpc.sh@42 -- # grep software 00:12:01.864 06:04:32 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:12:01.864 06:04:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:01.864 06:04:32 -- common/autotest_common.sh@10 -- # set +x 00:12:01.864 06:04:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:01.864 software 00:12:01.864 ************************************ 00:12:01.864 END TEST accel_assign_opcode 00:12:01.864 ************************************ 00:12:01.864 00:12:01.864 real 0m0.952s 00:12:01.864 user 0m0.052s 00:12:01.864 sys 0m0.009s 00:12:01.864 06:04:32 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:12:01.864 06:04:32 -- common/autotest_common.sh@10 -- # set +x 00:12:01.864 06:04:32 -- accel/accel_rpc.sh@55 -- # killprocess 108731 00:12:01.864 06:04:32 -- common/autotest_common.sh@926 -- # '[' -z 108731 ']' 00:12:01.864 06:04:32 -- common/autotest_common.sh@930 -- # kill -0 108731 00:12:01.864 06:04:32 -- common/autotest_common.sh@931 -- # uname 00:12:01.864 06:04:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:01.864 06:04:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 108731 00:12:01.864 06:04:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:01.864 06:04:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:01.864 06:04:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 108731' 00:12:01.864 killing process with pid 108731 00:12:01.864 06:04:32 -- common/autotest_common.sh@945 -- # kill 108731 00:12:01.864 06:04:32 -- common/autotest_common.sh@950 -- # wait 108731 00:12:05.147 00:12:05.147 real 0m4.864s 00:12:05.147 user 0m4.600s 00:12:05.147 sys 0m0.747s 00:12:05.147 06:04:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:05.147 06:04:35 -- common/autotest_common.sh@10 -- # set +x 00:12:05.147 ************************************ 00:12:05.147 END TEST accel_rpc 00:12:05.147 ************************************ 00:12:05.147 06:04:35 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:05.147 06:04:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:05.147 06:04:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:05.147 06:04:35 -- common/autotest_common.sh@10 -- # set +x 00:12:05.147 ************************************ 00:12:05.147 START TEST app_cmdline 00:12:05.147 ************************************ 00:12:05.147 06:04:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:05.147 * Looking for test storage... 00:12:05.147 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:05.147 06:04:35 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:12:05.147 06:04:35 -- app/cmdline.sh@17 -- # spdk_tgt_pid=108871 00:12:05.147 06:04:35 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:12:05.147 06:04:35 -- app/cmdline.sh@18 -- # waitforlisten 108871 00:12:05.147 06:04:35 -- common/autotest_common.sh@819 -- # '[' -z 108871 ']' 00:12:05.147 06:04:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.147 06:04:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:05.147 06:04:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.147 06:04:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:05.147 06:04:35 -- common/autotest_common.sh@10 -- # set +x 00:12:05.147 [2024-06-11 06:04:35.418084] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:12:05.147 [2024-06-11 06:04:35.419253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108871 ] 00:12:05.147 [2024-06-11 06:04:35.604973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.404 [2024-06-11 06:04:35.896255] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:05.404 [2024-06-11 06:04:35.896525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.772 06:04:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:06.772 06:04:37 -- common/autotest_common.sh@852 -- # return 0 00:12:06.772 06:04:37 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:12:06.772 { 00:12:06.772 "version": "SPDK v24.01.1-pre git sha1 130b9406a", 00:12:06.772 "fields": { 00:12:06.772 "major": 24, 00:12:06.772 "minor": 1, 00:12:06.772 "patch": 1, 00:12:06.772 "suffix": "-pre", 00:12:06.772 "commit": "130b9406a" 00:12:06.772 } 00:12:06.772 } 00:12:07.031 06:04:37 -- app/cmdline.sh@22 -- # expected_methods=() 00:12:07.031 06:04:37 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:12:07.031 06:04:37 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:12:07.031 06:04:37 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:12:07.031 06:04:37 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:12:07.031 06:04:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:07.031 06:04:37 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:12:07.031 06:04:37 -- common/autotest_common.sh@10 -- # set +x 00:12:07.031 06:04:37 -- app/cmdline.sh@26 -- # sort 00:12:07.031 06:04:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:07.031 06:04:37 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:07.031 06:04:37 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:07.031 06:04:37 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:07.031 06:04:37 -- common/autotest_common.sh@640 -- # local es=0 00:12:07.031 06:04:37 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:07.031 06:04:37 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:07.031 06:04:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:07.031 06:04:37 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:07.031 06:04:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:07.031 06:04:37 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:07.031 06:04:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:07.031 06:04:37 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:07.031 06:04:37 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:07.031 06:04:37 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:07.288 request: 00:12:07.288 { 00:12:07.288 "method": "env_dpdk_get_mem_stats", 00:12:07.288 "req_id": 1 00:12:07.288 } 00:12:07.288 Got 
JSON-RPC error response 00:12:07.288 response: 00:12:07.288 { 00:12:07.288 "code": -32601, 00:12:07.288 "message": "Method not found" 00:12:07.288 } 00:12:07.288 06:04:37 -- common/autotest_common.sh@643 -- # es=1 00:12:07.288 06:04:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:07.288 06:04:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:07.288 06:04:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:07.288 06:04:37 -- app/cmdline.sh@1 -- # killprocess 108871 00:12:07.288 06:04:37 -- common/autotest_common.sh@926 -- # '[' -z 108871 ']' 00:12:07.288 06:04:37 -- common/autotest_common.sh@930 -- # kill -0 108871 00:12:07.288 06:04:37 -- common/autotest_common.sh@931 -- # uname 00:12:07.288 06:04:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:07.288 06:04:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 108871 00:12:07.288 06:04:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:07.288 06:04:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:07.288 killing process with pid 108871 00:12:07.288 06:04:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 108871' 00:12:07.288 06:04:37 -- common/autotest_common.sh@945 -- # kill 108871 00:12:07.288 06:04:37 -- common/autotest_common.sh@950 -- # wait 108871 00:12:10.568 00:12:10.568 real 0m5.345s 00:12:10.568 user 0m5.866s 00:12:10.568 sys 0m0.794s 00:12:10.568 06:04:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:10.568 06:04:40 -- common/autotest_common.sh@10 -- # set +x 00:12:10.568 ************************************ 00:12:10.568 END TEST app_cmdline 00:12:10.568 ************************************ 00:12:10.568 06:04:40 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:10.568 06:04:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:10.568 06:04:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:10.568 06:04:40 -- common/autotest_common.sh@10 -- # set +x 00:12:10.568 ************************************ 00:12:10.568 START TEST version 00:12:10.568 ************************************ 00:12:10.568 06:04:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:10.568 * Looking for test storage... 
00:12:10.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:10.568 06:04:40 -- app/version.sh@17 -- # get_header_version major 00:12:10.568 06:04:40 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:10.568 06:04:40 -- app/version.sh@14 -- # cut -f2 00:12:10.568 06:04:40 -- app/version.sh@14 -- # tr -d '"' 00:12:10.568 06:04:40 -- app/version.sh@17 -- # major=24 00:12:10.568 06:04:40 -- app/version.sh@18 -- # get_header_version minor 00:12:10.568 06:04:40 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:10.568 06:04:40 -- app/version.sh@14 -- # cut -f2 00:12:10.568 06:04:40 -- app/version.sh@14 -- # tr -d '"' 00:12:10.568 06:04:40 -- app/version.sh@18 -- # minor=1 00:12:10.568 06:04:40 -- app/version.sh@19 -- # get_header_version patch 00:12:10.568 06:04:40 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:10.568 06:04:40 -- app/version.sh@14 -- # cut -f2 00:12:10.568 06:04:40 -- app/version.sh@14 -- # tr -d '"' 00:12:10.568 06:04:40 -- app/version.sh@19 -- # patch=1 00:12:10.568 06:04:40 -- app/version.sh@20 -- # get_header_version suffix 00:12:10.568 06:04:40 -- app/version.sh@14 -- # cut -f2 00:12:10.568 06:04:40 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:10.568 06:04:40 -- app/version.sh@14 -- # tr -d '"' 00:12:10.568 06:04:40 -- app/version.sh@20 -- # suffix=-pre 00:12:10.568 06:04:40 -- app/version.sh@22 -- # version=24.1 00:12:10.568 06:04:40 -- app/version.sh@25 -- # (( patch != 0 )) 00:12:10.568 06:04:40 -- app/version.sh@25 -- # version=24.1.1 00:12:10.568 06:04:40 -- app/version.sh@28 -- # version=24.1.1rc0 00:12:10.568 06:04:40 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:10.568 06:04:40 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:10.568 06:04:40 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:12:10.568 06:04:40 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:12:10.568 00:12:10.568 real 0m0.164s 00:12:10.568 user 0m0.115s 00:12:10.568 sys 0m0.098s 00:12:10.568 06:04:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:10.568 06:04:40 -- common/autotest_common.sh@10 -- # set +x 00:12:10.568 ************************************ 00:12:10.568 END TEST version 00:12:10.568 ************************************ 00:12:10.568 06:04:40 -- spdk/autotest.sh@194 -- # '[' 1 -eq 1 ']' 00:12:10.568 06:04:40 -- spdk/autotest.sh@195 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:12:10.568 06:04:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:10.568 06:04:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:10.568 06:04:40 -- common/autotest_common.sh@10 -- # set +x 00:12:10.568 ************************************ 00:12:10.568 START TEST blockdev_general 00:12:10.568 ************************************ 00:12:10.568 06:04:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:12:10.568 * Looking for test storage... 
00:12:10.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:10.568 06:04:40 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:10.568 06:04:40 -- bdev/nbd_common.sh@6 -- # set -e 00:12:10.568 06:04:40 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:10.568 06:04:40 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:10.568 06:04:40 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:10.568 06:04:40 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:10.568 06:04:40 -- bdev/blockdev.sh@18 -- # : 00:12:10.568 06:04:40 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:12:10.568 06:04:40 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:12:10.568 06:04:40 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:12:10.568 06:04:40 -- bdev/blockdev.sh@672 -- # uname -s 00:12:10.568 06:04:40 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:12:10.568 06:04:40 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:12:10.568 06:04:40 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:12:10.568 06:04:40 -- bdev/blockdev.sh@681 -- # crypto_device= 00:12:10.568 06:04:40 -- bdev/blockdev.sh@682 -- # dek= 00:12:10.568 06:04:40 -- bdev/blockdev.sh@683 -- # env_ctx= 00:12:10.568 06:04:40 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:12:10.568 06:04:40 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:12:10.568 06:04:40 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:12:10.568 06:04:40 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:12:10.568 06:04:40 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:12:10.568 06:04:40 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=109072 00:12:10.568 06:04:40 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:12:10.568 06:04:40 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:10.568 06:04:40 -- bdev/blockdev.sh@47 -- # waitforlisten 109072 00:12:10.568 06:04:40 -- common/autotest_common.sh@819 -- # '[' -z 109072 ']' 00:12:10.568 06:04:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.568 06:04:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:10.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.568 06:04:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.568 06:04:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:10.568 06:04:40 -- common/autotest_common.sh@10 -- # set +x 00:12:10.568 [2024-06-11 06:04:41.074649] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:12:10.568 [2024-06-11 06:04:41.075518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109072 ] 00:12:10.827 [2024-06-11 06:04:41.258045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.086 [2024-06-11 06:04:41.500178] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:11.086 [2024-06-11 06:04:41.500407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.344 06:04:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:11.344 06:04:41 -- common/autotest_common.sh@852 -- # return 0 00:12:11.344 06:04:41 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:12:11.344 06:04:41 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:12:11.344 06:04:41 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:12:11.344 06:04:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:11.344 06:04:41 -- common/autotest_common.sh@10 -- # set +x 00:12:12.280 [2024-06-11 06:04:42.852924] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:12.280 [2024-06-11 06:04:42.853009] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:12.280 00:12:12.280 [2024-06-11 06:04:42.860885] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:12.280 [2024-06-11 06:04:42.860934] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:12.280 00:12:12.280 Malloc0 00:12:12.564 Malloc1 00:12:12.564 Malloc2 00:12:12.564 Malloc3 00:12:12.564 Malloc4 00:12:12.564 Malloc5 00:12:12.822 Malloc6 00:12:12.822 Malloc7 00:12:12.822 Malloc8 00:12:12.822 Malloc9 00:12:12.822 [2024-06-11 06:04:43.405649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:12.822 [2024-06-11 06:04:43.405768] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.822 [2024-06-11 06:04:43.405817] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:12:12.822 [2024-06-11 06:04:43.405856] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.822 [2024-06-11 06:04:43.408963] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.822 [2024-06-11 06:04:43.409040] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:12.822 TestPT 00:12:12.822 06:04:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:12.822 06:04:43 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:12:13.080 5000+0 records in 00:12:13.081 5000+0 records out 00:12:13.081 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0346973 s, 295 MB/s 00:12:13.081 06:04:43 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:12:13.081 06:04:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.081 06:04:43 -- common/autotest_common.sh@10 -- # set +x 00:12:13.081 AIO0 00:12:13.081 06:04:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:13.081 06:04:43 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:12:13.081 06:04:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.081 06:04:43 -- common/autotest_common.sh@10 -- # set +x 
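The AIO0 bdev that appears in the tables below is assembled in three steps, all traced in this section; re-running them standalone would look roughly like this (a sketch — the trace uses the rpc_cmd wrapper, which forwards to scripts/rpc.py over the target's RPC socket):

    # 10 MB backing file: 5000 blocks of 2048 bytes, matching the dd output above
    dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000
    # register the file as AIO bdev "AIO0" with a 2048-byte block size
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create \
        /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048
    # block until bdev examination has completed
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine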
00:12:13.081 06:04:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:13.081 06:04:43 -- bdev/blockdev.sh@738 -- # cat 00:12:13.081 06:04:43 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:12:13.081 06:04:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.081 06:04:43 -- common/autotest_common.sh@10 -- # set +x 00:12:13.081 06:04:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:13.081 06:04:43 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:12:13.081 06:04:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.081 06:04:43 -- common/autotest_common.sh@10 -- # set +x 00:12:13.081 06:04:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:13.081 06:04:43 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:13.081 06:04:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.081 06:04:43 -- common/autotest_common.sh@10 -- # set +x 00:12:13.081 06:04:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:13.081 06:04:43 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:12:13.081 06:04:43 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:12:13.081 06:04:43 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:12:13.081 06:04:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.081 06:04:43 -- common/autotest_common.sh@10 -- # set +x 00:12:13.081 06:04:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:13.081 06:04:43 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:12:13.081 06:04:43 -- bdev/blockdev.sh@747 -- # jq -r .name 00:12:13.082 06:04:43 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "3faa221f-ff43-4e73-ac35-693f2d58b974"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3faa221f-ff43-4e73-ac35-693f2d58b974",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "62487afa-6a6e-5b26-b001-e361a0a211cd"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "62487afa-6a6e-5b26-b001-e361a0a211cd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "b7d14b28-8336-5221-9f98-4b2177966fef"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "b7d14b28-8336-5221-9f98-4b2177966fef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "df6b0705-d8bf-586e-8ba3-d697629d7457"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "df6b0705-d8bf-586e-8ba3-d697629d7457",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "0f74f7c8-477a-56da-97bc-0b63536e343a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0f74f7c8-477a-56da-97bc-0b63536e343a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "e2e0af42-b4cd-5cc5-84a2-dc50ea7fc4f2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e2e0af42-b4cd-5cc5-84a2-dc50ea7fc4f2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "52b756d4-7c55-5061-9ab9-5fa4cd528512"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "52b756d4-7c55-5061-9ab9-5fa4cd528512",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": 
[' ' "a386dd92-3e0a-554a-891a-67bb6c61466b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a386dd92-3e0a-554a-891a-67bb6c61466b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "f376bbc3-a869-589d-9d90-10ad9fad7683"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f376bbc3-a869-589d-9d90-10ad9fad7683",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "a2fb79fd-2cd0-55e2-a40b-ae9515452241"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a2fb79fd-2cd0-55e2-a40b-ae9515452241",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "dd5763d4-84f6-551e-b165-9c74f7134cb6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dd5763d4-84f6-551e-b165-9c74f7134cb6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "b7c8f4b7-2765-5b09-a737-bfad49ed4c15"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b7c8f4b7-2765-5b09-a737-bfad49ed4c15",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "bd112e3d-8639-4b74-85df-a456d10c04f8"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "bd112e3d-8639-4b74-85df-a456d10c04f8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "bd112e3d-8639-4b74-85df-a456d10c04f8",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "69370a72-131e-4f8f-9d8b-c8226a7f9d8c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "85ac982e-eac3-4cca-a522-ce08fbf7e183",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "70254839-f488-414c-85d9-7e9fc9c5c699"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "70254839-f488-414c-85d9-7e9fc9c5c699",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "70254839-f488-414c-85d9-7e9fc9c5c699",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "2b90cf78-8401-438d-bdcc-603601cca30e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "d071178d-ad4e-4a6a-a5df-6da8abd53a9f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "914e0fbb-862e-4918-b383-93c80ee74f32"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "914e0fbb-862e-4918-b383-93c80ee74f32",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "914e0fbb-862e-4918-b383-93c80ee74f32",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "af09a647-3e16-4dd6-9431-bb02731d73db",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "c98a4a12-b5a8-495f-bada-079a8c278bc5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "4faf661e-3bae-41a2-bb11-f50d22355afd"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "4faf661e-3bae-41a2-bb11-f50d22355afd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:13.339 06:04:43 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:12:13.339 06:04:43 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:12:13.339 06:04:43 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:12:13.339 06:04:43 -- bdev/blockdev.sh@752 -- # killprocess 109072 00:12:13.339 06:04:43 -- common/autotest_common.sh@926 -- # '[' -z 109072 ']' 00:12:13.339 06:04:43 -- common/autotest_common.sh@930 -- # kill -0 109072 00:12:13.339 06:04:43 -- common/autotest_common.sh@931 -- # uname 00:12:13.339 06:04:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:13.339 06:04:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 109072 00:12:13.339 06:04:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:13.339 06:04:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:13.339 killing process with pid 109072 00:12:13.339 06:04:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 109072' 00:12:13.339 06:04:43 -- common/autotest_common.sh@945 -- # kill 109072 00:12:13.339 06:04:43 -- common/autotest_common.sh@950 -- # wait 109072 00:12:17.531 06:04:47 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:17.531 06:04:47 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:17.531 06:04:47 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
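The killprocess sequence traced repeatedly in this log (pids 108731, 108871, 109072) reduces to the sketch below, reconstructed from the autotest_common.sh steps shown in the trace; only the branch actually taken here (a Linux reactor_0 process, not sudo) is spelled out, and the rest is an assumption:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                      # fail fast if the pid is already gone
        [[ $(uname) == Linux ]] && \
            process_name=$(ps --no-headers -o comm= "$pid")
        if [[ $process_name != sudo ]]; then    # it is reactor_0 in every case above
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"                         # reap, so the END TEST timing is accurate
    }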
00:12:17.531 06:04:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:17.531 06:04:47 -- common/autotest_common.sh@10 -- # set +x 00:12:17.531 ************************************ 00:12:17.531 START TEST bdev_hello_world 00:12:17.531 ************************************ 00:12:17.531 06:04:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:17.531 [2024-06-11 06:04:47.648532] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:12:17.531 [2024-06-11 06:04:47.648742] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109174 ] 00:12:17.531 [2024-06-11 06:04:47.831064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.531 [2024-06-11 06:04:48.103138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.098 [2024-06-11 06:04:48.632664] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:18.098 [2024-06-11 06:04:48.632785] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:18.098 [2024-06-11 06:04:48.640616] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:18.098 [2024-06-11 06:04:48.640689] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:18.098 [2024-06-11 06:04:48.648630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:18.098 [2024-06-11 06:04:48.648685] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:18.098 [2024-06-11 06:04:48.648733] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:18.357 [2024-06-11 06:04:48.928821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:18.357 [2024-06-11 06:04:48.928997] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:18.357 [2024-06-11 06:04:48.929061] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:12:18.357 [2024-06-11 06:04:48.929094] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:18.357 [2024-06-11 06:04:48.932073] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:18.357 [2024-06-11 06:04:48.932134] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:18.925 [2024-06-11 06:04:49.378993] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:18.925 [2024-06-11 06:04:49.379135] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:12:18.925 [2024-06-11 06:04:49.379283] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:18.925 [2024-06-11 06:04:49.379402] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:18.925 [2024-06-11 06:04:49.379545] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:18.925 [2024-06-11 06:04:49.379610] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:18.925 [2024-06-11 06:04:49.379713] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
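The whole bdev_hello_world stage condenses to the single example invocation shown in the run_test trace above; standalone it would be:

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0
    # expected tail of the output, per the hello_bdev.c notices above:
    #   Read string from bdev : Hello World!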
00:12:18.925 00:12:18.925 [2024-06-11 06:04:49.379792] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:22.299 00:12:22.299 real 0m4.925s 00:12:22.299 user 0m4.226s 00:12:22.299 sys 0m0.557s 00:12:22.299 06:04:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:22.299 06:04:52 -- common/autotest_common.sh@10 -- # set +x 00:12:22.299 ************************************ 00:12:22.299 END TEST bdev_hello_world 00:12:22.299 ************************************ 00:12:22.299 06:04:52 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:12:22.299 06:04:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:22.299 06:04:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:22.299 06:04:52 -- common/autotest_common.sh@10 -- # set +x 00:12:22.299 ************************************ 00:12:22.299 START TEST bdev_bounds 00:12:22.299 ************************************ 00:12:22.299 06:04:52 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:12:22.299 06:04:52 -- bdev/blockdev.sh@288 -- # bdevio_pid=109257 00:12:22.299 06:04:52 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:22.299 06:04:52 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:22.299 06:04:52 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 109257' 00:12:22.299 Process bdevio pid: 109257 00:12:22.299 06:04:52 -- bdev/blockdev.sh@291 -- # waitforlisten 109257 00:12:22.299 06:04:52 -- common/autotest_common.sh@819 -- # '[' -z 109257 ']' 00:12:22.299 06:04:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.299 06:04:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:22.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.299 06:04:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.299 06:04:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:22.299 06:04:52 -- common/autotest_common.sh@10 -- # set +x 00:12:22.299 [2024-06-11 06:04:52.654609] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:12:22.299 [2024-06-11 06:04:52.654849] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109257 ] 00:12:22.299 [2024-06-11 06:04:52.854954] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:22.557 [2024-06-11 06:04:53.127605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.557 [2024-06-11 06:04:53.128683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.557 [2024-06-11 06:04:53.128661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.123 [2024-06-11 06:04:53.627026] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:23.123 [2024-06-11 06:04:53.627133] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:23.123 [2024-06-11 06:04:53.634983] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:23.123 [2024-06-11 06:04:53.635067] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:23.123 [2024-06-11 06:04:53.643027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:23.123 [2024-06-11 06:04:53.643107] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:23.123 [2024-06-11 06:04:53.643133] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:23.382 [2024-06-11 06:04:53.885182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:23.382 [2024-06-11 06:04:53.885353] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.382 [2024-06-11 06:04:53.885410] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:12:23.382 [2024-06-11 06:04:53.885433] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.382 [2024-06-11 06:04:53.888261] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.382 [2024-06-11 06:04:53.888309] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:23.950 06:04:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:23.950 06:04:54 -- common/autotest_common.sh@852 -- # return 0 00:12:23.950 06:04:54 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:23.950 I/O targets: 00:12:23.950 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:12:23.950 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:12:23.950 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:12:23.950 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:12:23.950 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:12:23.950 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:12:23.950 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:12:23.950 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:12:23.950 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:12:23.950 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:12:23.950 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:12:23.950 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:12:23.950 raid0: 131072 blocks of 512 bytes (64 MiB) 00:12:23.950 concat0: 131072 blocks of 512 bytes (64 MiB) 00:12:23.950 raid1: 65536 blocks of 512 bytes (32 MiB) 00:12:23.950 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
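The bdevio suites that follow are driven in two steps, both visible in this trace: the bdevio app is started with -w (wait for RPC) and -s 0 against the generated bdev config, then tests.py fires the suites over the RPC socket. Roughly (a sketch; the harness also waits for the RPC socket between the two commands, and the EXIT trap above tears the app down via killprocess "$bdevio_pid"):

    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    bdevio_pid=$!
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests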
00:12:23.950 00:12:23.950 00:12:23.950 CUnit - A unit testing framework for C - Version 2.1-3 00:12:23.950 http://cunit.sourceforge.net/ 00:12:23.950 00:12:23.950 00:12:23.950 Suite: bdevio tests on: AIO0 00:12:23.950 Test: blockdev write read block ...passed 00:12:23.950 Test: blockdev write zeroes read block ...passed 00:12:23.950 Test: blockdev write zeroes read no split ...passed 00:12:23.950 Test: blockdev write zeroes read split ...passed 00:12:23.950 Test: blockdev write zeroes read split partial ...passed 00:12:23.950 Test: blockdev reset ...passed 00:12:23.950 Test: blockdev write read 8 blocks ...passed 00:12:23.950 Test: blockdev write read size > 128k ...passed 00:12:23.950 Test: blockdev write read invalid size ...passed 00:12:23.950 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:23.950 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:23.950 Test: blockdev write read max offset ...passed 00:12:23.950 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:23.950 Test: blockdev writev readv 8 blocks ...passed 00:12:23.950 Test: blockdev writev readv 30 x 1block ...passed 00:12:23.950 Test: blockdev writev readv block ...passed 00:12:23.950 Test: blockdev writev readv size > 128k ...passed 00:12:23.950 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:23.950 Test: blockdev comparev and writev ...passed 00:12:23.950 Test: blockdev nvme passthru rw ...passed 00:12:23.950 Test: blockdev nvme passthru vendor specific ...passed 00:12:23.950 Test: blockdev nvme admin passthru ...passed 00:12:23.950 Test: blockdev copy ...passed 00:12:23.950 Suite: bdevio tests on: raid1 00:12:23.950 Test: blockdev write read block ...passed 00:12:23.950 Test: blockdev write zeroes read block ...passed 00:12:23.950 Test: blockdev write zeroes read no split ...passed 00:12:23.950 Test: blockdev write zeroes read split ...passed 00:12:24.210 Test: blockdev write zeroes read split partial ...passed 00:12:24.210 Test: blockdev reset ...passed 00:12:24.210 Test: blockdev write read 8 blocks ...passed 00:12:24.210 Test: blockdev write read size > 128k ...passed 00:12:24.210 Test: blockdev write read invalid size ...passed 00:12:24.210 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:24.210 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:24.210 Test: blockdev write read max offset ...passed 00:12:24.210 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:24.210 Test: blockdev writev readv 8 blocks ...passed 00:12:24.210 Test: blockdev writev readv 30 x 1block ...passed 00:12:24.210 Test: blockdev writev readv block ...passed 00:12:24.210 Test: blockdev writev readv size > 128k ...passed 00:12:24.210 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:24.210 Test: blockdev comparev and writev ...passed 00:12:24.210 Test: blockdev nvme passthru rw ...passed 00:12:24.210 Test: blockdev nvme passthru vendor specific ...passed 00:12:24.210 Test: blockdev nvme admin passthru ...passed 00:12:24.210 Test: blockdev copy ...passed 00:12:24.210 Suite: bdevio tests on: concat0 00:12:24.210 Test: blockdev write read block ...passed 00:12:24.210 Test: blockdev write zeroes read block ...passed 00:12:24.210 Test: blockdev write zeroes read no split ...passed 00:12:24.210 Test: blockdev write zeroes read split ...passed 00:12:24.210 Test: blockdev write zeroes read split partial ...passed 00:12:24.210 Test: blockdev reset 
...passed 00:12:24.210 Test: blockdev write read 8 blocks ...passed 00:12:24.210 Test: blockdev write read size > 128k ...passed 00:12:24.210 Test: blockdev write read invalid size ...passed 00:12:24.210 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:24.210 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:24.210 Test: blockdev write read max offset ...passed 00:12:24.210 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:24.210 Test: blockdev writev readv 8 blocks ...passed 00:12:24.210 Test: blockdev writev readv 30 x 1block ...passed 00:12:24.210 Test: blockdev writev readv block ...passed 00:12:24.210 Test: blockdev writev readv size > 128k ...passed 00:12:24.210 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:24.210 Test: blockdev comparev and writev ...passed 00:12:24.210 Test: blockdev nvme passthru rw ...passed 00:12:24.210 Test: blockdev nvme passthru vendor specific ...passed 00:12:24.210 Test: blockdev nvme admin passthru ...passed 00:12:24.210 Test: blockdev copy ...passed 00:12:24.210 Suite: bdevio tests on: raid0 00:12:24.210 Test: blockdev write read block ...passed 00:12:24.210 Test: blockdev write zeroes read block ...passed 00:12:24.210 Test: blockdev write zeroes read no split ...passed 00:12:24.210 Test: blockdev write zeroes read split ...passed 00:12:24.210 Test: blockdev write zeroes read split partial ...passed 00:12:24.210 Test: blockdev reset ...passed 00:12:24.210 Test: blockdev write read 8 blocks ...passed 00:12:24.210 Test: blockdev write read size > 128k ...passed 00:12:24.210 Test: blockdev write read invalid size ...passed 00:12:24.210 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:24.210 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:24.210 Test: blockdev write read max offset ...passed 00:12:24.210 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:24.210 Test: blockdev writev readv 8 blocks ...passed 00:12:24.210 Test: blockdev writev readv 30 x 1block ...passed 00:12:24.210 Test: blockdev writev readv block ...passed 00:12:24.210 Test: blockdev writev readv size > 128k ...passed 00:12:24.210 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:24.210 Test: blockdev comparev and writev ...passed 00:12:24.210 Test: blockdev nvme passthru rw ...passed 00:12:24.210 Test: blockdev nvme passthru vendor specific ...passed 00:12:24.210 Test: blockdev nvme admin passthru ...passed 00:12:24.210 Test: blockdev copy ...passed 00:12:24.210 Suite: bdevio tests on: TestPT 00:12:24.210 Test: blockdev write read block ...passed 00:12:24.210 Test: blockdev write zeroes read block ...passed 00:12:24.210 Test: blockdev write zeroes read no split ...passed 00:12:24.210 Test: blockdev write zeroes read split ...passed 00:12:24.470 Test: blockdev write zeroes read split partial ...passed 00:12:24.470 Test: blockdev reset ...passed 00:12:24.470 Test: blockdev write read 8 blocks ...passed 00:12:24.470 Test: blockdev write read size > 128k ...passed 00:12:24.470 Test: blockdev write read invalid size ...passed 00:12:24.470 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:24.470 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:24.470 Test: blockdev write read max offset ...passed 00:12:24.470 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:24.470 Test: blockdev writev readv 8 blocks 
...passed 00:12:24.470 Test: blockdev writev readv 30 x 1block ...passed 00:12:24.470 Test: blockdev writev readv block ...passed 00:12:24.470 Test: blockdev writev readv size > 128k ...passed 00:12:24.470 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:24.470 Test: blockdev comparev and writev ...passed 00:12:24.470 Test: blockdev nvme passthru rw ...passed 00:12:24.470 Test: blockdev nvme passthru vendor specific ...passed 00:12:24.470 Test: blockdev nvme admin passthru ...passed 00:12:24.470 Test: blockdev copy ...passed 00:12:24.470 Suite: bdevio tests on: Malloc2p7 00:12:24.470 Test: blockdev write read block ...passed 00:12:24.470 Test: blockdev write zeroes read block ...passed 00:12:24.470 Test: blockdev write zeroes read no split ...passed 00:12:24.470 Test: blockdev write zeroes read split ...passed 00:12:24.470 Test: blockdev write zeroes read split partial ...passed 00:12:24.470 Test: blockdev reset ...passed 00:12:24.470 Test: blockdev write read 8 blocks ...passed 00:12:24.470 Test: blockdev write read size > 128k ...passed 00:12:24.470 Test: blockdev write read invalid size ...passed 00:12:24.470 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:24.470 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:24.470 Test: blockdev write read max offset ...passed 00:12:24.470 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:24.470 Test: blockdev writev readv 8 blocks ...passed 00:12:24.470 Test: blockdev writev readv 30 x 1block ...passed 00:12:24.470 Test: blockdev writev readv block ...passed 00:12:24.470 Test: blockdev writev readv size > 128k ...passed 00:12:24.470 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:24.470 Test: blockdev comparev and writev ...passed 00:12:24.470 Test: blockdev nvme passthru rw ...passed 00:12:24.470 Test: blockdev nvme passthru vendor specific ...passed 00:12:24.470 Test: blockdev nvme admin passthru ...passed 00:12:24.470 Test: blockdev copy ...passed 00:12:24.470 Suite: bdevio tests on: Malloc2p6 00:12:24.470 Test: blockdev write read block ...passed 00:12:24.470 Test: blockdev write zeroes read block ...passed 00:12:24.470 Test: blockdev write zeroes read no split ...passed 00:12:24.470 Test: blockdev write zeroes read split ...passed 00:12:24.470 Test: blockdev write zeroes read split partial ...passed 00:12:24.470 Test: blockdev reset ...passed 00:12:24.470 Test: blockdev write read 8 blocks ...passed 00:12:24.470 Test: blockdev write read size > 128k ...passed 00:12:24.470 Test: blockdev write read invalid size ...passed 00:12:24.470 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:24.470 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:24.470 Test: blockdev write read max offset ...passed 00:12:24.470 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:24.470 Test: blockdev writev readv 8 blocks ...passed 00:12:24.470 Test: blockdev writev readv 30 x 1block ...passed 00:12:24.470 Test: blockdev writev readv block ...passed 00:12:24.470 Test: blockdev writev readv size > 128k ...passed 00:12:24.470 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:24.470 Test: blockdev comparev and writev ...passed 00:12:24.470 Test: blockdev nvme passthru rw ...passed 00:12:24.470 Test: blockdev nvme passthru vendor specific ...passed 00:12:24.470 Test: blockdev nvme admin passthru ...passed 00:12:24.470 Test: blockdev copy ...passed 
00:12:24.470 Suite: bdevio tests on: Malloc2p5 00:12:24.470 Test: blockdev write read block ...passed 00:12:24.470 Test: blockdev write zeroes read block ...passed 00:12:24.470 Test: blockdev write zeroes read no split ...passed 00:12:24.470 Test: blockdev write zeroes read split ...passed 00:12:24.470 Test: blockdev write zeroes read split partial ...passed 00:12:24.470 Test: blockdev reset ...passed 00:12:24.470 Test: blockdev write read 8 blocks ...passed 00:12:24.729 Test: blockdev write read size > 128k ...passed 00:12:24.729 Test: blockdev write read invalid size ...passed 00:12:24.729 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:24.729 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:24.729 Test: blockdev write read max offset ...passed 00:12:24.729 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:24.729 Test: blockdev writev readv 8 blocks ...passed 00:12:24.729 Test: blockdev writev readv 30 x 1block ...passed 00:12:24.729 Test: blockdev writev readv block ...passed 00:12:24.729 Test: blockdev writev readv size > 128k ...passed 00:12:24.729 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:24.729 Test: blockdev comparev and writev ...passed 00:12:24.729 Test: blockdev nvme passthru rw ...passed 00:12:24.729 Test: blockdev nvme passthru vendor specific ...passed 00:12:24.729 Test: blockdev nvme admin passthru ...passed 00:12:24.729 Test: blockdev copy ...passed 00:12:24.729 Suite: bdevio tests on: Malloc2p4 00:12:24.729 Test: blockdev write read block ...passed 00:12:24.729 Test: blockdev write zeroes read block ...passed 00:12:24.729 Test: blockdev write zeroes read no split ...passed 00:12:24.729 Test: blockdev write zeroes read split ...passed 00:12:24.729 Test: blockdev write zeroes read split partial ...passed 00:12:24.729 Test: blockdev reset ...passed 00:12:24.729 Test: blockdev write read 8 blocks ...passed 00:12:24.729 Test: blockdev write read size > 128k ...passed 00:12:24.729 Test: blockdev write read invalid size ...passed 00:12:24.729 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:24.729 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:24.729 Test: blockdev write read max offset ...passed 00:12:24.729 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:24.729 Test: blockdev writev readv 8 blocks ...passed 00:12:24.729 Test: blockdev writev readv 30 x 1block ...passed 00:12:24.729 Test: blockdev writev readv block ...passed 00:12:24.729 Test: blockdev writev readv size > 128k ...passed 00:12:24.729 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:24.729 Test: blockdev comparev and writev ...passed 00:12:24.729 Test: blockdev nvme passthru rw ...passed 00:12:24.729 Test: blockdev nvme passthru vendor specific ...passed 00:12:24.729 Test: blockdev nvme admin passthru ...passed 00:12:24.729 Test: blockdev copy ...passed 00:12:24.729 Suite: bdevio tests on: Malloc2p3 00:12:24.729 Test: blockdev write read block ...passed 00:12:24.729 Test: blockdev write zeroes read block ...passed 00:12:24.729 Test: blockdev write zeroes read no split ...passed 00:12:24.729 Test: blockdev write zeroes read split ...passed 00:12:24.729 Test: blockdev write zeroes read split partial ...passed 00:12:24.729 Test: blockdev reset ...passed 00:12:24.729 Test: blockdev write read 8 blocks ...passed 00:12:24.729 Test: blockdev write read size > 128k ...passed 00:12:24.729 Test: 
blockdev write read invalid size ...passed 00:12:24.729 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:24.729 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:24.729 Test: blockdev write read max offset ...passed 00:12:24.729 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:24.729 Test: blockdev writev readv 8 blocks ...passed 00:12:24.729 Test: blockdev writev readv 30 x 1block ...passed 00:12:24.729 Test: blockdev writev readv block ...passed 00:12:24.729 Test: blockdev writev readv size > 128k ...passed 00:12:24.729 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:24.729 Test: blockdev comparev and writev ...passed 00:12:24.729 Test: blockdev nvme passthru rw ...passed 00:12:24.729 Test: blockdev nvme passthru vendor specific ...passed 00:12:24.729 Test: blockdev nvme admin passthru ...passed 00:12:24.729 Test: blockdev copy ...passed 00:12:24.729 Suite: bdevio tests on: Malloc2p2 00:12:24.729 Test: blockdev write read block ...passed 00:12:24.729 Test: blockdev write zeroes read block ...passed 00:12:24.729 Test: blockdev write zeroes read no split ...passed 00:12:24.729 Test: blockdev write zeroes read split ...passed 00:12:24.729 Test: blockdev write zeroes read split partial ...passed 00:12:24.729 Test: blockdev reset ...passed 00:12:24.729 Test: blockdev write read 8 blocks ...passed 00:12:24.729 Test: blockdev write read size > 128k ...passed 00:12:24.729 Test: blockdev write read invalid size ...passed 00:12:24.729 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:24.729 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:24.729 Test: blockdev write read max offset ...passed 00:12:24.729 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:24.729 Test: blockdev writev readv 8 blocks ...passed 00:12:24.729 Test: blockdev writev readv 30 x 1block ...passed 00:12:24.729 Test: blockdev writev readv block ...passed 00:12:24.730 Test: blockdev writev readv size > 128k ...passed 00:12:24.730 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:24.730 Test: blockdev comparev and writev ...passed 00:12:24.730 Test: blockdev nvme passthru rw ...passed 00:12:24.730 Test: blockdev nvme passthru vendor specific ...passed 00:12:24.730 Test: blockdev nvme admin passthru ...passed 00:12:24.730 Test: blockdev copy ...passed 00:12:24.730 Suite: bdevio tests on: Malloc2p1 00:12:24.730 Test: blockdev write read block ...passed 00:12:24.730 Test: blockdev write zeroes read block ...passed 00:12:24.730 Test: blockdev write zeroes read no split ...passed 00:12:24.989 Test: blockdev write zeroes read split ...passed 00:12:24.989 Test: blockdev write zeroes read split partial ...passed 00:12:24.989 Test: blockdev reset ...passed 00:12:24.989 Test: blockdev write read 8 blocks ...passed 00:12:24.989 Test: blockdev write read size > 128k ...passed 00:12:24.989 Test: blockdev write read invalid size ...passed 00:12:24.989 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:24.989 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:24.989 Test: blockdev write read max offset ...passed 00:12:24.989 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:24.989 Test: blockdev writev readv 8 blocks ...passed 00:12:24.989 Test: blockdev writev readv 30 x 1block ...passed 00:12:24.989 Test: blockdev writev readv block ...passed 
00:12:24.989 Test: blockdev writev readv size > 128k ...passed 00:12:24.989 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:24.989 Test: blockdev comparev and writev ...passed 00:12:24.989 Test: blockdev nvme passthru rw ...passed 00:12:24.989 Test: blockdev nvme passthru vendor specific ...passed 00:12:24.989 Test: blockdev nvme admin passthru ...passed 00:12:24.989 Test: blockdev copy ...passed 00:12:24.989 Suite: bdevio tests on: Malloc2p0 00:12:24.989 Test: blockdev write read block ...passed 00:12:24.989 Test: blockdev write zeroes read block ...passed 00:12:24.989 Test: blockdev write zeroes read no split ...passed 00:12:24.989 Test: blockdev write zeroes read split ...passed 00:12:24.989 Test: blockdev write zeroes read split partial ...passed 00:12:24.989 Test: blockdev reset ...passed 00:12:24.989 Test: blockdev write read 8 blocks ...passed 00:12:24.989 Test: blockdev write read size > 128k ...passed 00:12:24.989 Test: blockdev write read invalid size ...passed 00:12:24.989 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:24.989 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:24.989 Test: blockdev write read max offset ...passed 00:12:24.989 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:24.989 Test: blockdev writev readv 8 blocks ...passed 00:12:24.989 Test: blockdev writev readv 30 x 1block ...passed 00:12:24.989 Test: blockdev writev readv block ...passed 00:12:24.989 Test: blockdev writev readv size > 128k ...passed 00:12:24.989 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:24.989 Test: blockdev comparev and writev ...passed 00:12:24.989 Test: blockdev nvme passthru rw ...passed 00:12:24.989 Test: blockdev nvme passthru vendor specific ...passed 00:12:24.989 Test: blockdev nvme admin passthru ...passed 00:12:24.989 Test: blockdev copy ...passed 00:12:24.989 Suite: bdevio tests on: Malloc1p1 00:12:24.989 Test: blockdev write read block ...passed 00:12:24.989 Test: blockdev write zeroes read block ...passed 00:12:24.989 Test: blockdev write zeroes read no split ...passed 00:12:24.989 Test: blockdev write zeroes read split ...passed 00:12:24.989 Test: blockdev write zeroes read split partial ...passed 00:12:24.989 Test: blockdev reset ...passed 00:12:24.989 Test: blockdev write read 8 blocks ...passed 00:12:24.989 Test: blockdev write read size > 128k ...passed 00:12:24.989 Test: blockdev write read invalid size ...passed 00:12:24.989 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:24.989 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:24.989 Test: blockdev write read max offset ...passed 00:12:24.989 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:24.989 Test: blockdev writev readv 8 blocks ...passed 00:12:24.989 Test: blockdev writev readv 30 x 1block ...passed 00:12:24.989 Test: blockdev writev readv block ...passed 00:12:24.989 Test: blockdev writev readv size > 128k ...passed 00:12:24.989 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:24.989 Test: blockdev comparev and writev ...passed 00:12:24.989 Test: blockdev nvme passthru rw ...passed 00:12:24.989 Test: blockdev nvme passthru vendor specific ...passed 00:12:24.989 Test: blockdev nvme admin passthru ...passed 00:12:24.989 Test: blockdev copy ...passed 00:12:24.989 Suite: bdevio tests on: Malloc1p0 00:12:24.989 Test: blockdev write read block ...passed 00:12:24.989 Test: blockdev 
write zeroes read block ...passed 00:12:24.989 Test: blockdev write zeroes read no split ...passed 00:12:25.248 Test: blockdev write zeroes read split ...passed 00:12:25.248 Test: blockdev write zeroes read split partial ...passed 00:12:25.248 Test: blockdev reset ...passed 00:12:25.248 Test: blockdev write read 8 blocks ...passed 00:12:25.248 Test: blockdev write read size > 128k ...passed 00:12:25.248 Test: blockdev write read invalid size ...passed 00:12:25.248 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:25.248 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:25.248 Test: blockdev write read max offset ...passed 00:12:25.248 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:25.248 Test: blockdev writev readv 8 blocks ...passed 00:12:25.248 Test: blockdev writev readv 30 x 1block ...passed 00:12:25.248 Test: blockdev writev readv block ...passed 00:12:25.248 Test: blockdev writev readv size > 128k ...passed 00:12:25.248 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:25.249 Test: blockdev comparev and writev ...passed 00:12:25.249 Test: blockdev nvme passthru rw ...passed 00:12:25.249 Test: blockdev nvme passthru vendor specific ...passed 00:12:25.249 Test: blockdev nvme admin passthru ...passed 00:12:25.249 Test: blockdev copy ...passed 00:12:25.249 Suite: bdevio tests on: Malloc0 00:12:25.249 Test: blockdev write read block ...passed 00:12:25.249 Test: blockdev write zeroes read block ...passed 00:12:25.249 Test: blockdev write zeroes read no split ...passed 00:12:25.249 Test: blockdev write zeroes read split ...passed 00:12:25.249 Test: blockdev write zeroes read split partial ...passed 00:12:25.249 Test: blockdev reset ...passed 00:12:25.249 Test: blockdev write read 8 blocks ...passed 00:12:25.249 Test: blockdev write read size > 128k ...passed 00:12:25.249 Test: blockdev write read invalid size ...passed 00:12:25.249 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:25.249 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:25.249 Test: blockdev write read max offset ...passed 00:12:25.249 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:25.249 Test: blockdev writev readv 8 blocks ...passed 00:12:25.249 Test: blockdev writev readv 30 x 1block ...passed 00:12:25.249 Test: blockdev writev readv block ...passed 00:12:25.249 Test: blockdev writev readv size > 128k ...passed 00:12:25.249 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:25.249 Test: blockdev comparev and writev ...passed 00:12:25.249 Test: blockdev nvme passthru rw ...passed 00:12:25.249 Test: blockdev nvme passthru vendor specific ...passed 00:12:25.249 Test: blockdev nvme admin passthru ...passed 00:12:25.249 Test: blockdev copy ...passed 00:12:25.249 00:12:25.249 Run Summary: Type Total Ran Passed Failed Inactive 00:12:25.249 suites 16 16 n/a 0 0 00:12:25.249 tests 368 368 368 0 0 00:12:25.249 asserts 2224 2224 2224 0 n/a 00:12:25.249 00:12:25.249 Elapsed time = 3.770 seconds 00:12:25.249 0 00:12:25.249 06:04:55 -- bdev/blockdev.sh@293 -- # killprocess 109257 00:12:25.249 06:04:55 -- common/autotest_common.sh@926 -- # '[' -z 109257 ']' 00:12:25.249 06:04:55 -- common/autotest_common.sh@930 -- # kill -0 109257 00:12:25.249 06:04:55 -- common/autotest_common.sh@931 -- # uname 00:12:25.249 06:04:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:25.249 06:04:55 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 109257 00:12:25.249 06:04:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:25.249 06:04:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:25.249 06:04:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 109257' 00:12:25.249 killing process with pid 109257 00:12:25.249 06:04:55 -- common/autotest_common.sh@945 -- # kill 109257 00:12:25.249 06:04:55 -- common/autotest_common.sh@950 -- # wait 109257 00:12:28.529 06:04:58 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:12:28.529 00:12:28.529 real 0m5.889s 00:12:28.529 user 0m14.877s 00:12:28.529 sys 0m0.873s 00:12:28.529 06:04:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:28.529 06:04:58 -- common/autotest_common.sh@10 -- # set +x 00:12:28.529 ************************************ 00:12:28.529 END TEST bdev_bounds 00:12:28.529 ************************************ 00:12:28.529 06:04:58 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:28.529 06:04:58 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:12:28.529 06:04:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:28.529 06:04:58 -- common/autotest_common.sh@10 -- # set +x 00:12:28.529 ************************************ 00:12:28.529 START TEST bdev_nbd 00:12:28.529 ************************************ 00:12:28.529 06:04:58 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:28.529 06:04:58 -- bdev/blockdev.sh@298 -- # uname -s 00:12:28.529 06:04:58 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:12:28.529 06:04:58 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:28.529 06:04:58 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:28.529 06:04:58 -- bdev/blockdev.sh@302 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:28.529 06:04:58 -- bdev/blockdev.sh@302 -- # local bdev_all 00:12:28.529 06:04:58 -- bdev/blockdev.sh@303 -- # local bdev_num=16 00:12:28.529 06:04:58 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:12:28.529 06:04:58 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:28.529 06:04:58 -- bdev/blockdev.sh@309 -- # local nbd_all 00:12:28.529 06:04:58 -- bdev/blockdev.sh@310 -- # bdev_num=16 00:12:28.529 06:04:58 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:28.529 06:04:58 -- bdev/blockdev.sh@312 -- # local nbd_list 00:12:28.529 06:04:58 -- bdev/blockdev.sh@313 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 
'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:28.529 06:04:58 -- bdev/blockdev.sh@313 -- # local bdev_list 00:12:28.529 06:04:58 -- bdev/blockdev.sh@316 -- # nbd_pid=109365 00:12:28.529 06:04:58 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:28.529 06:04:58 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:28.529 06:04:58 -- bdev/blockdev.sh@318 -- # waitforlisten 109365 /var/tmp/spdk-nbd.sock 00:12:28.529 06:04:58 -- common/autotest_common.sh@819 -- # '[' -z 109365 ']' 00:12:28.529 06:04:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:28.529 06:04:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:28.529 06:04:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:28.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:28.529 06:04:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:28.529 06:04:58 -- common/autotest_common.sh@10 -- # set +x 00:12:28.529 [2024-06-11 06:04:58.579787] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:12:28.529 [2024-06-11 06:04:58.579965] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.529 [2024-06-11 06:04:58.760015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.529 [2024-06-11 06:04:59.079590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.096 [2024-06-11 06:04:59.619322] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:29.096 [2024-06-11 06:04:59.619435] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:29.096 [2024-06-11 06:04:59.627263] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:29.096 [2024-06-11 06:04:59.627341] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:29.096 [2024-06-11 06:04:59.635279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:29.096 [2024-06-11 06:04:59.635341] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:29.096 [2024-06-11 06:04:59.635376] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:29.354 [2024-06-11 06:04:59.887546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:29.354 [2024-06-11 06:04:59.887731] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.354 [2024-06-11 06:04:59.887785] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:12:29.354 [2024-06-11 06:04:59.887815] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.354 [2024-06-11 06:04:59.890640] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.354 [2024-06-11 06:04:59.890868] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:29.923 06:05:00 -- common/autotest_common.sh@848 -- # (( i == 0 
)) 00:12:29.923 06:05:00 -- common/autotest_common.sh@852 -- # return 0 00:12:29.923 06:05:00 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:29.923 06:05:00 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:29.923 06:05:00 -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:29.923 06:05:00 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:29.923 06:05:00 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:29.923 06:05:00 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:29.923 06:05:00 -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:29.923 06:05:00 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:29.923 06:05:00 -- bdev/nbd_common.sh@24 -- # local i 00:12:29.923 06:05:00 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:29.923 06:05:00 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:29.923 06:05:00 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:29.923 06:05:00 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:12:29.923 06:05:00 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:29.923 06:05:00 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:29.923 06:05:00 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:29.923 06:05:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:12:29.923 06:05:00 -- common/autotest_common.sh@857 -- # local i 00:12:29.923 06:05:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:29.923 06:05:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:29.923 06:05:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:12:29.923 06:05:00 -- common/autotest_common.sh@861 -- # break 00:12:29.923 06:05:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:29.923 06:05:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:29.923 06:05:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:29.923 1+0 records in 00:12:29.923 1+0 records out 00:12:29.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000640457 s, 6.4 MB/s 00:12:29.923 06:05:00 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.923 06:05:00 -- common/autotest_common.sh@874 -- # size=4096 00:12:29.923 06:05:00 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.183 06:05:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:30.183 06:05:00 -- common/autotest_common.sh@877 -- # return 0 00:12:30.183 06:05:00 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:30.183 06:05:00 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:30.183 06:05:00 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:12:30.183 06:05:00 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:30.183 06:05:00 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:30.183 06:05:00 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:30.183 06:05:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:12:30.183 06:05:00 -- common/autotest_common.sh@857 -- # local i 00:12:30.183 06:05:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:30.183 06:05:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:30.183 06:05:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:12:30.183 06:05:00 -- common/autotest_common.sh@861 -- # break 00:12:30.183 06:05:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:30.183 06:05:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:30.183 06:05:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:30.183 1+0 records in 00:12:30.183 1+0 records out 00:12:30.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000574479 s, 7.1 MB/s 00:12:30.183 06:05:00 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.183 06:05:00 -- common/autotest_common.sh@874 -- # size=4096 00:12:30.183 06:05:00 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.183 06:05:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:30.183 06:05:00 -- common/autotest_common.sh@877 -- # return 0 00:12:30.183 06:05:00 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:30.183 06:05:00 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:30.183 06:05:00 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:12:30.751 06:05:01 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:30.751 06:05:01 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:30.751 06:05:01 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:30.751 06:05:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:12:30.751 06:05:01 -- common/autotest_common.sh@857 -- # local i 00:12:30.751 06:05:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:30.751 06:05:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:30.751 06:05:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:12:30.751 06:05:01 -- common/autotest_common.sh@861 -- # break 00:12:30.751 06:05:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:30.751 06:05:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:30.751 06:05:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:30.751 1+0 records in 00:12:30.751 1+0 records out 00:12:30.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049817 s, 8.2 MB/s 00:12:30.751 06:05:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.751 06:05:01 -- common/autotest_common.sh@874 -- # size=4096 00:12:30.751 06:05:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.751 06:05:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:30.751 06:05:01 -- common/autotest_common.sh@877 -- # return 0 00:12:30.751 06:05:01 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:30.751 06:05:01 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:30.751 
06:05:01 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:12:31.010 06:05:01 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:31.010 06:05:01 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:31.010 06:05:01 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:31.010 06:05:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:12:31.010 06:05:01 -- common/autotest_common.sh@857 -- # local i 00:12:31.010 06:05:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:31.010 06:05:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:31.010 06:05:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:12:31.010 06:05:01 -- common/autotest_common.sh@861 -- # break 00:12:31.010 06:05:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:31.010 06:05:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:31.010 06:05:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:31.010 1+0 records in 00:12:31.010 1+0 records out 00:12:31.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00071788 s, 5.7 MB/s 00:12:31.010 06:05:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.010 06:05:01 -- common/autotest_common.sh@874 -- # size=4096 00:12:31.010 06:05:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.010 06:05:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:31.010 06:05:01 -- common/autotest_common.sh@877 -- # return 0 00:12:31.010 06:05:01 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:31.010 06:05:01 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:31.011 06:05:01 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:12:31.270 06:05:01 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:31.270 06:05:01 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:31.270 06:05:01 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:31.270 06:05:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:12:31.270 06:05:01 -- common/autotest_common.sh@857 -- # local i 00:12:31.270 06:05:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:31.270 06:05:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:31.270 06:05:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:12:31.270 06:05:01 -- common/autotest_common.sh@861 -- # break 00:12:31.270 06:05:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:31.270 06:05:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:31.270 06:05:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:31.270 1+0 records in 00:12:31.270 1+0 records out 00:12:31.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000653376 s, 6.3 MB/s 00:12:31.270 06:05:01 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.270 06:05:01 -- common/autotest_common.sh@874 -- # size=4096 00:12:31.270 06:05:01 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.270 06:05:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:31.270 06:05:01 -- common/autotest_common.sh@877 -- # return 0 00:12:31.270 06:05:01 -- bdev/nbd_common.sh@27 -- # 
(( i++ )) 00:12:31.270 06:05:01 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:31.270 06:05:01 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:12:31.529 06:05:02 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:31.529 06:05:02 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:31.529 06:05:02 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:31.529 06:05:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:12:31.529 06:05:02 -- common/autotest_common.sh@857 -- # local i 00:12:31.529 06:05:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:31.529 06:05:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:31.529 06:05:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:12:31.529 06:05:02 -- common/autotest_common.sh@861 -- # break 00:12:31.529 06:05:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:31.529 06:05:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:31.529 06:05:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:31.529 1+0 records in 00:12:31.529 1+0 records out 00:12:31.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000767208 s, 5.3 MB/s 00:12:31.529 06:05:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.787 06:05:02 -- common/autotest_common.sh@874 -- # size=4096 00:12:31.787 06:05:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.787 06:05:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:31.787 06:05:02 -- common/autotest_common.sh@877 -- # return 0 00:12:31.787 06:05:02 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:31.787 06:05:02 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:31.787 06:05:02 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:12:32.045 06:05:02 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:12:32.045 06:05:02 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:12:32.045 06:05:02 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:12:32.045 06:05:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:12:32.045 06:05:02 -- common/autotest_common.sh@857 -- # local i 00:12:32.045 06:05:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:32.045 06:05:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:32.045 06:05:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:12:32.045 06:05:02 -- common/autotest_common.sh@861 -- # break 00:12:32.045 06:05:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:32.045 06:05:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:32.045 06:05:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:32.045 1+0 records in 00:12:32.045 1+0 records out 00:12:32.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000818827 s, 5.0 MB/s 00:12:32.045 06:05:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.045 06:05:02 -- common/autotest_common.sh@874 -- # size=4096 00:12:32.045 06:05:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.045 06:05:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:32.045 06:05:02 -- 
common/autotest_common.sh@877 -- # return 0 00:12:32.045 06:05:02 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:32.045 06:05:02 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:32.045 06:05:02 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:12:32.304 06:05:02 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:12:32.304 06:05:02 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:12:32.304 06:05:02 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:12:32.304 06:05:02 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:12:32.304 06:05:02 -- common/autotest_common.sh@857 -- # local i 00:12:32.304 06:05:02 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:32.304 06:05:02 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:32.304 06:05:02 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:12:32.304 06:05:02 -- common/autotest_common.sh@861 -- # break 00:12:32.304 06:05:02 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:32.304 06:05:02 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:32.304 06:05:02 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:32.304 1+0 records in 00:12:32.304 1+0 records out 00:12:32.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00126758 s, 3.2 MB/s 00:12:32.304 06:05:02 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.304 06:05:02 -- common/autotest_common.sh@874 -- # size=4096 00:12:32.304 06:05:02 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.304 06:05:02 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:32.304 06:05:02 -- common/autotest_common.sh@877 -- # return 0 00:12:32.304 06:05:02 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:32.304 06:05:02 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:32.304 06:05:02 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:12:32.566 06:05:03 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:12:32.566 06:05:03 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:12:32.566 06:05:03 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:12:32.566 06:05:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:12:32.566 06:05:03 -- common/autotest_common.sh@857 -- # local i 00:12:32.566 06:05:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:32.566 06:05:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:32.566 06:05:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:12:32.566 06:05:03 -- common/autotest_common.sh@861 -- # break 00:12:32.566 06:05:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:32.566 06:05:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:32.566 06:05:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:32.566 1+0 records in 00:12:32.566 1+0 records out 00:12:32.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000912532 s, 4.5 MB/s 00:12:32.566 06:05:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.566 06:05:03 -- common/autotest_common.sh@874 -- # size=4096 00:12:32.566 06:05:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.566 
06:05:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:32.566 06:05:03 -- common/autotest_common.sh@877 -- # return 0 00:12:32.566 06:05:03 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:32.566 06:05:03 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:32.566 06:05:03 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:12:32.826 06:05:03 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:12:32.826 06:05:03 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:12:32.826 06:05:03 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:12:32.826 06:05:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:12:32.826 06:05:03 -- common/autotest_common.sh@857 -- # local i 00:12:32.826 06:05:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:32.826 06:05:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:32.826 06:05:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 00:12:32.826 06:05:03 -- common/autotest_common.sh@861 -- # break 00:12:32.826 06:05:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:32.826 06:05:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:32.826 06:05:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:32.826 1+0 records in 00:12:32.826 1+0 records out 00:12:32.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00069831 s, 5.9 MB/s 00:12:32.826 06:05:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.826 06:05:03 -- common/autotest_common.sh@874 -- # size=4096 00:12:32.826 06:05:03 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.826 06:05:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:32.826 06:05:03 -- common/autotest_common.sh@877 -- # return 0 00:12:32.826 06:05:03 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:32.826 06:05:03 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:32.826 06:05:03 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:12:33.085 06:05:03 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:12:33.085 06:05:03 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:12:33.085 06:05:03 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:12:33.085 06:05:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:12:33.085 06:05:03 -- common/autotest_common.sh@857 -- # local i 00:12:33.085 06:05:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:33.085 06:05:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:33.085 06:05:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:12:33.085 06:05:03 -- common/autotest_common.sh@861 -- # break 00:12:33.085 06:05:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:33.085 06:05:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:33.085 06:05:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.343 1+0 records in 00:12:33.343 1+0 records out 00:12:33.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000927106 s, 4.4 MB/s 00:12:33.343 06:05:03 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.343 06:05:03 -- common/autotest_common.sh@874 -- # size=4096 00:12:33.343 06:05:03 -- 
common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.343 06:05:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:33.343 06:05:03 -- common/autotest_common.sh@877 -- # return 0 00:12:33.343 06:05:03 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:33.343 06:05:03 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:33.343 06:05:03 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:12:33.657 06:05:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:12:33.657 06:05:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:12:33.657 06:05:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:12:33.657 06:05:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:12:33.657 06:05:04 -- common/autotest_common.sh@857 -- # local i 00:12:33.657 06:05:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:33.657 06:05:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:33.657 06:05:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:12:33.657 06:05:04 -- common/autotest_common.sh@861 -- # break 00:12:33.657 06:05:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:33.657 06:05:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:33.657 06:05:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.657 1+0 records in 00:12:33.657 1+0 records out 00:12:33.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000852195 s, 4.8 MB/s 00:12:33.657 06:05:04 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.657 06:05:04 -- common/autotest_common.sh@874 -- # size=4096 00:12:33.657 06:05:04 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.657 06:05:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:33.657 06:05:04 -- common/autotest_common.sh@877 -- # return 0 00:12:33.657 06:05:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:33.657 06:05:04 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:33.657 06:05:04 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:12:33.916 06:05:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:12:33.916 06:05:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:12:33.916 06:05:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:12:33.916 06:05:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:12:33.916 06:05:04 -- common/autotest_common.sh@857 -- # local i 00:12:33.916 06:05:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:33.916 06:05:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:33.916 06:05:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:12:33.916 06:05:04 -- common/autotest_common.sh@861 -- # break 00:12:33.916 06:05:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:33.916 06:05:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:33.916 06:05:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.916 1+0 records in 00:12:33.916 1+0 records out 00:12:33.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000857158 s, 4.8 MB/s 00:12:33.916 06:05:04 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:12:33.916 06:05:04 -- common/autotest_common.sh@874 -- # size=4096 00:12:33.916 06:05:04 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.916 06:05:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:33.916 06:05:04 -- common/autotest_common.sh@877 -- # return 0 00:12:33.916 06:05:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:33.916 06:05:04 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:33.916 06:05:04 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:12:34.198 06:05:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:12:34.198 06:05:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:12:34.198 06:05:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:12:34.198 06:05:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:12:34.198 06:05:04 -- common/autotest_common.sh@857 -- # local i 00:12:34.198 06:05:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:34.198 06:05:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:34.198 06:05:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:12:34.198 06:05:04 -- common/autotest_common.sh@861 -- # break 00:12:34.198 06:05:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:34.198 06:05:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:34.198 06:05:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:34.198 1+0 records in 00:12:34.198 1+0 records out 00:12:34.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000882195 s, 4.6 MB/s 00:12:34.198 06:05:04 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.198 06:05:04 -- common/autotest_common.sh@874 -- # size=4096 00:12:34.198 06:05:04 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.198 06:05:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:34.198 06:05:04 -- common/autotest_common.sh@877 -- # return 0 00:12:34.198 06:05:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:34.198 06:05:04 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:34.198 06:05:04 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:12:34.457 06:05:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:12:34.457 06:05:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:12:34.457 06:05:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:12:34.457 06:05:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:12:34.457 06:05:04 -- common/autotest_common.sh@857 -- # local i 00:12:34.457 06:05:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:34.457 06:05:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:34.457 06:05:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:12:34.457 06:05:04 -- common/autotest_common.sh@861 -- # break 00:12:34.457 06:05:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:34.457 06:05:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:34.457 06:05:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:34.457 1+0 records in 00:12:34.457 1+0 records out 00:12:34.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00138228 s, 3.0 MB/s 00:12:34.457 06:05:04 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.457 06:05:04 -- common/autotest_common.sh@874 -- # size=4096 00:12:34.457 06:05:04 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.457 06:05:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:34.457 06:05:05 -- common/autotest_common.sh@877 -- # return 0 00:12:34.457 06:05:05 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:34.457 06:05:05 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:34.457 06:05:05 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:12:34.716 06:05:05 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:12:34.716 06:05:05 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:12:34.717 06:05:05 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:12:34.717 06:05:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 00:12:34.717 06:05:05 -- common/autotest_common.sh@857 -- # local i 00:12:34.717 06:05:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:34.717 06:05:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:34.717 06:05:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:12:34.717 06:05:05 -- common/autotest_common.sh@861 -- # break 00:12:34.717 06:05:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:34.717 06:05:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:34.717 06:05:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:34.717 1+0 records in 00:12:34.717 1+0 records out 00:12:34.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00143226 s, 2.9 MB/s 00:12:34.717 06:05:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.717 06:05:05 -- common/autotest_common.sh@874 -- # size=4096 00:12:34.717 06:05:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.717 06:05:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:34.717 06:05:05 -- common/autotest_common.sh@877 -- # return 0 00:12:34.717 06:05:05 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:34.717 06:05:05 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:34.717 06:05:05 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:34.976 06:05:05 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:34.976 { 00:12:34.976 "nbd_device": "/dev/nbd0", 00:12:34.976 "bdev_name": "Malloc0" 00:12:34.976 }, 00:12:34.976 { 00:12:34.976 "nbd_device": "/dev/nbd1", 00:12:34.976 "bdev_name": "Malloc1p0" 00:12:34.976 }, 00:12:34.976 { 00:12:34.976 "nbd_device": "/dev/nbd2", 00:12:34.976 "bdev_name": "Malloc1p1" 00:12:34.976 }, 00:12:34.976 { 00:12:34.976 "nbd_device": "/dev/nbd3", 00:12:34.976 "bdev_name": "Malloc2p0" 00:12:34.976 }, 00:12:34.976 { 00:12:34.976 "nbd_device": "/dev/nbd4", 00:12:34.976 "bdev_name": "Malloc2p1" 00:12:34.976 }, 00:12:34.976 { 00:12:34.976 "nbd_device": "/dev/nbd5", 00:12:34.976 "bdev_name": "Malloc2p2" 00:12:34.976 }, 00:12:34.976 { 00:12:34.976 "nbd_device": "/dev/nbd6", 00:12:34.976 "bdev_name": "Malloc2p3" 00:12:34.976 }, 00:12:34.976 { 00:12:34.976 "nbd_device": "/dev/nbd7", 00:12:34.976 "bdev_name": "Malloc2p4" 00:12:34.976 }, 00:12:34.976 { 00:12:34.976 "nbd_device": "/dev/nbd8", 00:12:34.976 "bdev_name": "Malloc2p5" 
00:12:34.976 }, 00:12:34.976 { 00:12:34.976 "nbd_device": "/dev/nbd9", 00:12:34.976 "bdev_name": "Malloc2p6" 00:12:34.976 }, 00:12:34.976 { 00:12:34.976 "nbd_device": "/dev/nbd10", 00:12:34.976 "bdev_name": "Malloc2p7" 00:12:34.976 }, 00:12:34.976 { 00:12:34.976 "nbd_device": "/dev/nbd11", 00:12:34.976 "bdev_name": "TestPT" 00:12:34.976 }, 00:12:34.976 { 00:12:34.976 "nbd_device": "/dev/nbd12", 00:12:34.976 "bdev_name": "raid0" 00:12:34.976 }, 00:12:34.976 { 00:12:34.976 "nbd_device": "/dev/nbd13", 00:12:34.976 "bdev_name": "concat0" 00:12:34.976 }, 00:12:34.976 { 00:12:34.976 "nbd_device": "/dev/nbd14", 00:12:34.976 "bdev_name": "raid1" 00:12:34.976 }, 00:12:34.976 { 00:12:34.976 "nbd_device": "/dev/nbd15", 00:12:34.976 "bdev_name": "AIO0" 00:12:34.976 } 00:12:34.976 ]' 00:12:34.976 06:05:05 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:34.976 06:05:05 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:34.976 06:05:05 -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:34.976 { 00:12:34.976 "nbd_device": "/dev/nbd0", 00:12:34.976 "bdev_name": "Malloc0" 00:12:34.976 }, 00:12:34.976 { 00:12:34.976 "nbd_device": "/dev/nbd1", 00:12:34.976 "bdev_name": "Malloc1p0" 00:12:34.976 }, 00:12:34.976 { 00:12:34.976 "nbd_device": "/dev/nbd2", 00:12:34.976 "bdev_name": "Malloc1p1" 00:12:34.976 }, 00:12:34.976 { 00:12:34.976 "nbd_device": "/dev/nbd3", 00:12:34.976 "bdev_name": "Malloc2p0" 00:12:34.976 }, 00:12:34.976 { 00:12:34.976 "nbd_device": "/dev/nbd4", 00:12:34.977 "bdev_name": "Malloc2p1" 00:12:34.977 }, 00:12:34.977 { 00:12:34.977 "nbd_device": "/dev/nbd5", 00:12:34.977 "bdev_name": "Malloc2p2" 00:12:34.977 }, 00:12:34.977 { 00:12:34.977 "nbd_device": "/dev/nbd6", 00:12:34.977 "bdev_name": "Malloc2p3" 00:12:34.977 }, 00:12:34.977 { 00:12:34.977 "nbd_device": "/dev/nbd7", 00:12:34.977 "bdev_name": "Malloc2p4" 00:12:34.977 }, 00:12:34.977 { 00:12:34.977 "nbd_device": "/dev/nbd8", 00:12:34.977 "bdev_name": "Malloc2p5" 00:12:34.977 }, 00:12:34.977 { 00:12:34.977 "nbd_device": "/dev/nbd9", 00:12:34.977 "bdev_name": "Malloc2p6" 00:12:34.977 }, 00:12:34.977 { 00:12:34.977 "nbd_device": "/dev/nbd10", 00:12:34.977 "bdev_name": "Malloc2p7" 00:12:34.977 }, 00:12:34.977 { 00:12:34.977 "nbd_device": "/dev/nbd11", 00:12:34.977 "bdev_name": "TestPT" 00:12:34.977 }, 00:12:34.977 { 00:12:34.977 "nbd_device": "/dev/nbd12", 00:12:34.977 "bdev_name": "raid0" 00:12:34.977 }, 00:12:34.977 { 00:12:34.977 "nbd_device": "/dev/nbd13", 00:12:34.977 "bdev_name": "concat0" 00:12:34.977 }, 00:12:34.977 { 00:12:34.977 "nbd_device": "/dev/nbd14", 00:12:34.977 "bdev_name": "raid1" 00:12:34.977 }, 00:12:34.977 { 00:12:34.977 "nbd_device": "/dev/nbd15", 00:12:34.977 "bdev_name": "AIO0" 00:12:34.977 } 00:12:34.977 ]' 00:12:35.236 06:05:05 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:12:35.236 06:05:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:35.236 06:05:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:12:35.236 06:05:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:35.236 06:05:05 -- bdev/nbd_common.sh@51 -- # local i 00:12:35.236 06:05:05 
-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.236 06:05:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:35.496 06:05:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:35.496 06:05:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:35.496 06:05:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:35.496 06:05:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.496 06:05:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.496 06:05:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:35.496 06:05:05 -- bdev/nbd_common.sh@41 -- # break 00:12:35.496 06:05:05 -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.496 06:05:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.496 06:05:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:35.496 06:05:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:35.496 06:05:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:35.496 06:05:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:35.496 06:05:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.496 06:05:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.496 06:05:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:35.496 06:05:06 -- bdev/nbd_common.sh@41 -- # break 00:12:35.496 06:05:06 -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.496 06:05:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.496 06:05:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:35.754 06:05:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:35.754 06:05:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:35.754 06:05:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:35.754 06:05:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.754 06:05:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.754 06:05:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:35.754 06:05:06 -- bdev/nbd_common.sh@41 -- # break 00:12:35.754 06:05:06 -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.754 06:05:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.754 06:05:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:36.013 06:05:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:36.013 06:05:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:36.013 06:05:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:36.013 06:05:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:36.013 06:05:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:36.013 06:05:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:36.013 06:05:06 -- bdev/nbd_common.sh@41 -- # break 00:12:36.013 06:05:06 -- bdev/nbd_common.sh@45 -- # return 0 00:12:36.013 06:05:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:36.013 06:05:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:36.273 06:05:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:36.273 06:05:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:36.273 06:05:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:36.273 06:05:06 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:36.273 06:05:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:36.273 06:05:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:36.273 06:05:06 -- bdev/nbd_common.sh@41 -- # break 00:12:36.273 06:05:06 -- bdev/nbd_common.sh@45 -- # return 0 00:12:36.273 06:05:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:36.273 06:05:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:36.532 06:05:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:36.532 06:05:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:36.532 06:05:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:36.532 06:05:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:36.532 06:05:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:36.532 06:05:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:36.532 06:05:07 -- bdev/nbd_common.sh@41 -- # break 00:12:36.532 06:05:07 -- bdev/nbd_common.sh@45 -- # return 0 00:12:36.532 06:05:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:36.532 06:05:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:36.791 06:05:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:36.791 06:05:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:36.791 06:05:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:36.791 06:05:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:36.791 06:05:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:36.791 06:05:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:36.791 06:05:07 -- bdev/nbd_common.sh@41 -- # break 00:12:36.791 06:05:07 -- bdev/nbd_common.sh@45 -- # return 0 00:12:36.791 06:05:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:36.791 06:05:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:37.050 06:05:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:37.050 06:05:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:37.050 06:05:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:37.050 06:05:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.050 06:05:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.050 06:05:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:37.050 06:05:07 -- bdev/nbd_common.sh@41 -- # break 00:12:37.050 06:05:07 -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.050 06:05:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:37.050 06:05:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:37.309 06:05:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:37.309 06:05:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:37.309 06:05:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:37.309 06:05:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.309 06:05:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.309 06:05:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:37.309 06:05:07 -- bdev/nbd_common.sh@41 -- # break 00:12:37.309 06:05:07 -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.309 06:05:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:37.309 06:05:07 -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:37.568 06:05:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:37.568 06:05:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:37.568 06:05:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:37.568 06:05:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.568 06:05:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.568 06:05:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:37.568 06:05:08 -- bdev/nbd_common.sh@41 -- # break 00:12:37.568 06:05:08 -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.568 06:05:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:37.568 06:05:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:37.828 06:05:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:37.828 06:05:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:37.828 06:05:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:37.828 06:05:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.828 06:05:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.828 06:05:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:37.828 06:05:08 -- bdev/nbd_common.sh@41 -- # break 00:12:37.828 06:05:08 -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.828 06:05:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:37.828 06:05:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:37.828 06:05:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:38.087 06:05:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:38.087 06:05:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:38.087 06:05:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:38.087 06:05:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:38.087 06:05:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:38.087 06:05:08 -- bdev/nbd_common.sh@41 -- # break 00:12:38.087 06:05:08 -- bdev/nbd_common.sh@45 -- # return 0 00:12:38.087 06:05:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:38.087 06:05:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:38.346 06:05:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:38.346 06:05:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:38.346 06:05:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:38.346 06:05:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:38.346 06:05:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:38.346 06:05:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:38.346 06:05:08 -- bdev/nbd_common.sh@41 -- # break 00:12:38.346 06:05:08 -- bdev/nbd_common.sh@45 -- # return 0 00:12:38.346 06:05:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:38.346 06:05:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:38.606 06:05:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:38.606 06:05:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:38.606 06:05:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:38.606 06:05:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:38.606 06:05:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
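# --- editor's note (not part of the captured log) ---------------------------
# The teardown above runs one nbd_stop_disk RPC per device, then
# waitfornbd_exit -- the inverse of the attach-side wait: it polls
# /proc/partitions until the nbdN entry disappears, giving up after 20
# tries. A minimal reconstruction from the bdev/nbd_common.sh@35-45 lines
# echoed in this trace; the sleep between polls is an assumption, since
# every device here is already gone when the loop first re-tests:
waitfornbd_exit() {
    local nbd_name=$1
    local i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1   # assumed back-off; never visible in this trace
        else
            break       # matches the '@41 break' lines above
        fi
    done
    return 0            # matches '@45 return 0'
}
# -----------------------------------------------------------------------------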
00:12:38.606 06:05:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:38.606 06:05:09 -- bdev/nbd_common.sh@41 -- # break 00:12:38.606 06:05:09 -- bdev/nbd_common.sh@45 -- # return 0 00:12:38.606 06:05:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:38.606 06:05:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:38.868 06:05:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:38.868 06:05:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:38.868 06:05:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:38.868 06:05:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:38.868 06:05:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:38.868 06:05:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:38.868 06:05:09 -- bdev/nbd_common.sh@41 -- # break 00:12:38.868 06:05:09 -- bdev/nbd_common.sh@45 -- # return 0 00:12:38.868 06:05:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:38.868 06:05:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:39.128 06:05:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:39.128 06:05:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:39.128 06:05:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:39.128 06:05:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.128 06:05:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.128 06:05:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:39.128 06:05:09 -- bdev/nbd_common.sh@41 -- # break 00:12:39.128 06:05:09 -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.128 06:05:09 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:39.128 06:05:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:39.128 06:05:09 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:39.388 06:05:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:39.388 06:05:09 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:39.388 06:05:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:39.388 06:05:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:39.388 06:05:09 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:39.388 06:05:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:39.388 06:05:09 -- bdev/nbd_common.sh@65 -- # true 00:12:39.388 06:05:09 -- bdev/nbd_common.sh@65 -- # count=0 00:12:39.388 06:05:09 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:39.388 06:05:09 -- bdev/nbd_common.sh@122 -- # count=0 00:12:39.388 06:05:09 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:39.388 06:05:09 -- bdev/nbd_common.sh@127 -- # return 0 00:12:39.388 06:05:09 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:39.388 06:05:09 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:39.388 06:05:09 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 
'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:39.388 06:05:09 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:39.388 06:05:09 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:39.388 06:05:09 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:39.388 06:05:09 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:39.388 06:05:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:39.388 06:05:09 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:39.388 06:05:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:39.388 06:05:09 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:39.388 06:05:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:39.388 06:05:09 -- bdev/nbd_common.sh@12 -- # local i 00:12:39.388 06:05:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:39.388 06:05:09 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:39.388 06:05:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:39.647 /dev/nbd0 00:12:39.647 06:05:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:39.647 06:05:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:39.647 06:05:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:12:39.647 06:05:10 -- common/autotest_common.sh@857 -- # local i 00:12:39.647 06:05:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:39.648 06:05:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:39.648 06:05:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:12:39.648 06:05:10 -- common/autotest_common.sh@861 -- # break 00:12:39.648 06:05:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:39.648 06:05:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:39.648 06:05:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.648 1+0 records in 00:12:39.648 1+0 records out 00:12:39.648 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037114 s, 11.0 MB/s 00:12:39.648 06:05:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.907 06:05:10 -- common/autotest_common.sh@874 -- # size=4096 00:12:39.907 06:05:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.907 06:05:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:39.907 06:05:10 -- common/autotest_common.sh@877 -- # return 0 00:12:39.907 06:05:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:39.907 06:05:10 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:39.907 06:05:10 
-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:12:39.907 /dev/nbd1 00:12:39.907 06:05:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:39.907 06:05:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:39.907 06:05:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:12:39.907 06:05:10 -- common/autotest_common.sh@857 -- # local i 00:12:39.907 06:05:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:39.907 06:05:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:39.907 06:05:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:12:39.907 06:05:10 -- common/autotest_common.sh@861 -- # break 00:12:39.907 06:05:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:39.907 06:05:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:39.907 06:05:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.907 1+0 records in 00:12:39.907 1+0 records out 00:12:39.907 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457709 s, 8.9 MB/s 00:12:39.907 06:05:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.907 06:05:10 -- common/autotest_common.sh@874 -- # size=4096 00:12:39.907 06:05:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.907 06:05:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:39.907 06:05:10 -- common/autotest_common.sh@877 -- # return 0 00:12:39.907 06:05:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:39.907 06:05:10 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:39.907 06:05:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:12:40.166 /dev/nbd10 00:12:40.166 06:05:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:40.166 06:05:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:40.166 06:05:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:12:40.166 06:05:10 -- common/autotest_common.sh@857 -- # local i 00:12:40.166 06:05:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:40.167 06:05:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:40.167 06:05:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:12:40.167 06:05:10 -- common/autotest_common.sh@861 -- # break 00:12:40.167 06:05:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:40.167 06:05:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:40.167 06:05:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.167 1+0 records in 00:12:40.167 1+0 records out 00:12:40.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481381 s, 8.5 MB/s 00:12:40.167 06:05:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.167 06:05:10 -- common/autotest_common.sh@874 -- # size=4096 00:12:40.167 06:05:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.167 06:05:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:40.167 06:05:10 -- common/autotest_common.sh@877 -- # return 0 00:12:40.167 06:05:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:40.167 06:05:10 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 
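# --- editor's note (not part of the captured log) ---------------------------
# Every attach above follows the same pattern: an nbd_start_disk RPC over
# /var/tmp/spdk-nbd.sock binds one bdev to the next free /dev/nbdN, then
# waitfornbd proves the kernel device is live before moving on. A minimal
# reconstruction from the common/autotest_common.sh@856-877 lines echoed in
# this trace; the retry sleeps are assumptions (every grep and dd here
# succeeds on the first pass, so they never show up in the log):
waitfornbd() {
    local nbd_name=$1
    local i
    # wait for the device node to appear in the partition table
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed back-off
    done
    # then read one 4 KiB block through it with O_DIRECT to prove real I/O
    for ((i = 1; i <= 20; i++)); do
        dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
            bs=4096 count=1 iflag=direct && break
        sleep 0.1   # assumed back-off
    done
    size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
    rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
    [ "$size" != 0 ]    # a non-empty read is the pass condition ('@876' above)
}
# -----------------------------------------------------------------------------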
00:12:40.167 06:05:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:12:40.426 /dev/nbd11 00:12:40.426 06:05:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:40.426 06:05:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:40.426 06:05:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:12:40.426 06:05:11 -- common/autotest_common.sh@857 -- # local i 00:12:40.426 06:05:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:40.426 06:05:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:40.426 06:05:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:12:40.426 06:05:11 -- common/autotest_common.sh@861 -- # break 00:12:40.426 06:05:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:40.426 06:05:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:40.426 06:05:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.426 1+0 records in 00:12:40.426 1+0 records out 00:12:40.426 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000773583 s, 5.3 MB/s 00:12:40.426 06:05:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.426 06:05:11 -- common/autotest_common.sh@874 -- # size=4096 00:12:40.426 06:05:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.426 06:05:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:40.426 06:05:11 -- common/autotest_common.sh@877 -- # return 0 00:12:40.426 06:05:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:40.426 06:05:11 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:40.426 06:05:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:12:40.686 /dev/nbd12 00:12:40.686 06:05:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:40.686 06:05:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:40.686 06:05:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:12:40.686 06:05:11 -- common/autotest_common.sh@857 -- # local i 00:12:40.686 06:05:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:40.686 06:05:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:40.686 06:05:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:12:40.686 06:05:11 -- common/autotest_common.sh@861 -- # break 00:12:40.686 06:05:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:40.686 06:05:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:40.686 06:05:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.686 1+0 records in 00:12:40.686 1+0 records out 00:12:40.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334079 s, 12.3 MB/s 00:12:40.686 06:05:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.686 06:05:11 -- common/autotest_common.sh@874 -- # size=4096 00:12:40.686 06:05:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.686 06:05:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:40.686 06:05:11 -- common/autotest_common.sh@877 -- # return 0 00:12:40.686 06:05:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:40.686 06:05:11 -- 
bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:40.686 06:05:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:12:40.946 /dev/nbd13 00:12:40.946 06:05:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:40.946 06:05:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:40.946 06:05:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:12:40.946 06:05:11 -- common/autotest_common.sh@857 -- # local i 00:12:40.946 06:05:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:40.946 06:05:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:40.946 06:05:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:12:40.946 06:05:11 -- common/autotest_common.sh@861 -- # break 00:12:40.946 06:05:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:40.946 06:05:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:40.946 06:05:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.946 1+0 records in 00:12:40.946 1+0 records out 00:12:40.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000497688 s, 8.2 MB/s 00:12:40.946 06:05:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.946 06:05:11 -- common/autotest_common.sh@874 -- # size=4096 00:12:40.946 06:05:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.946 06:05:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:40.946 06:05:11 -- common/autotest_common.sh@877 -- # return 0 00:12:40.946 06:05:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:40.946 06:05:11 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:40.946 06:05:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:12:41.205 /dev/nbd14 00:12:41.205 06:05:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:12:41.205 06:05:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:12:41.205 06:05:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:12:41.205 06:05:11 -- common/autotest_common.sh@857 -- # local i 00:12:41.205 06:05:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:41.205 06:05:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:41.205 06:05:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:12:41.205 06:05:11 -- common/autotest_common.sh@861 -- # break 00:12:41.205 06:05:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:41.205 06:05:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:41.205 06:05:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.205 1+0 records in 00:12:41.205 1+0 records out 00:12:41.205 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361984 s, 11.3 MB/s 00:12:41.205 06:05:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.205 06:05:11 -- common/autotest_common.sh@874 -- # size=4096 00:12:41.205 06:05:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.205 06:05:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:41.205 06:05:11 -- common/autotest_common.sh@877 -- # return 0 00:12:41.205 06:05:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:12:41.205 06:05:11 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:41.205 06:05:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:12:41.464 /dev/nbd15 00:12:41.464 06:05:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:12:41.464 06:05:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:12:41.464 06:05:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 00:12:41.464 06:05:12 -- common/autotest_common.sh@857 -- # local i 00:12:41.464 06:05:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:41.464 06:05:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:41.464 06:05:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:12:41.464 06:05:12 -- common/autotest_common.sh@861 -- # break 00:12:41.464 06:05:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:41.464 06:05:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:41.464 06:05:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.464 1+0 records in 00:12:41.464 1+0 records out 00:12:41.464 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0005661 s, 7.2 MB/s 00:12:41.464 06:05:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.464 06:05:12 -- common/autotest_common.sh@874 -- # size=4096 00:12:41.464 06:05:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.464 06:05:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:41.464 06:05:12 -- common/autotest_common.sh@877 -- # return 0 00:12:41.464 06:05:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:41.464 06:05:12 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:41.464 06:05:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:12:41.722 /dev/nbd2 00:12:41.722 06:05:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:12:41.722 06:05:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:12:41.722 06:05:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:12:41.722 06:05:12 -- common/autotest_common.sh@857 -- # local i 00:12:41.722 06:05:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:41.722 06:05:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:41.722 06:05:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:12:41.722 06:05:12 -- common/autotest_common.sh@861 -- # break 00:12:41.722 06:05:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:41.722 06:05:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:41.722 06:05:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.722 1+0 records in 00:12:41.722 1+0 records out 00:12:41.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509425 s, 8.0 MB/s 00:12:41.722 06:05:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.722 06:05:12 -- common/autotest_common.sh@874 -- # size=4096 00:12:41.722 06:05:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.722 06:05:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:41.722 06:05:12 -- common/autotest_common.sh@877 -- # return 0 00:12:41.722 06:05:12 -- bdev/nbd_common.sh@14 -- 
# (( i++ )) 00:12:41.722 06:05:12 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:41.722 06:05:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:12:41.981 /dev/nbd3 00:12:41.981 06:05:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:12:41.981 06:05:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:12:41.981 06:05:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:12:41.981 06:05:12 -- common/autotest_common.sh@857 -- # local i 00:12:41.981 06:05:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:41.981 06:05:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:41.981 06:05:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:12:41.981 06:05:12 -- common/autotest_common.sh@861 -- # break 00:12:41.981 06:05:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:41.981 06:05:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:41.981 06:05:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.981 1+0 records in 00:12:41.981 1+0 records out 00:12:41.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000960755 s, 4.3 MB/s 00:12:41.981 06:05:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.981 06:05:12 -- common/autotest_common.sh@874 -- # size=4096 00:12:41.981 06:05:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.981 06:05:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:41.981 06:05:12 -- common/autotest_common.sh@877 -- # return 0 00:12:41.981 06:05:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:41.981 06:05:12 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:41.981 06:05:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:12:42.239 /dev/nbd4 00:12:42.239 06:05:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:12:42.239 06:05:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:12:42.239 06:05:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:12:42.239 06:05:12 -- common/autotest_common.sh@857 -- # local i 00:12:42.239 06:05:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:42.239 06:05:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:42.239 06:05:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:12:42.239 06:05:12 -- common/autotest_common.sh@861 -- # break 00:12:42.239 06:05:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:42.239 06:05:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:42.239 06:05:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.239 1+0 records in 00:12:42.239 1+0 records out 00:12:42.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117889 s, 3.5 MB/s 00:12:42.497 06:05:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.497 06:05:12 -- common/autotest_common.sh@874 -- # size=4096 00:12:42.497 06:05:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.497 06:05:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:42.497 06:05:12 -- common/autotest_common.sh@877 -- # return 0 00:12:42.497 06:05:12 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:42.497 06:05:12 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:42.497 06:05:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:12:42.755 /dev/nbd5 00:12:42.755 06:05:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:12:42.755 06:05:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:12:42.755 06:05:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:12:42.755 06:05:13 -- common/autotest_common.sh@857 -- # local i 00:12:42.755 06:05:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:42.755 06:05:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:42.755 06:05:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:12:42.755 06:05:13 -- common/autotest_common.sh@861 -- # break 00:12:42.755 06:05:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:42.755 06:05:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:42.755 06:05:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.755 1+0 records in 00:12:42.755 1+0 records out 00:12:42.755 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000766436 s, 5.3 MB/s 00:12:42.755 06:05:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.755 06:05:13 -- common/autotest_common.sh@874 -- # size=4096 00:12:42.755 06:05:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.755 06:05:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:42.755 06:05:13 -- common/autotest_common.sh@877 -- # return 0 00:12:42.755 06:05:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:42.755 06:05:13 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:42.755 06:05:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:12:43.014 /dev/nbd6 00:12:43.014 06:05:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:12:43.014 06:05:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:12:43.014 06:05:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:12:43.014 06:05:13 -- common/autotest_common.sh@857 -- # local i 00:12:43.014 06:05:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:43.014 06:05:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:43.014 06:05:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:12:43.014 06:05:13 -- common/autotest_common.sh@861 -- # break 00:12:43.014 06:05:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:43.014 06:05:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:43.014 06:05:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.014 1+0 records in 00:12:43.014 1+0 records out 00:12:43.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000870166 s, 4.7 MB/s 00:12:43.014 06:05:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.014 06:05:13 -- common/autotest_common.sh@874 -- # size=4096 00:12:43.014 06:05:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.014 06:05:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:43.014 06:05:13 -- common/autotest_common.sh@877 -- # return 0 00:12:43.014 06:05:13 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:43.014 06:05:13 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:43.014 06:05:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:12:43.272 /dev/nbd7 00:12:43.272 06:05:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:12:43.272 06:05:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:12:43.273 06:05:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:12:43.273 06:05:13 -- common/autotest_common.sh@857 -- # local i 00:12:43.273 06:05:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:43.273 06:05:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:43.273 06:05:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:12:43.273 06:05:13 -- common/autotest_common.sh@861 -- # break 00:12:43.273 06:05:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:43.273 06:05:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:43.273 06:05:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.273 1+0 records in 00:12:43.273 1+0 records out 00:12:43.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000977432 s, 4.2 MB/s 00:12:43.273 06:05:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.273 06:05:13 -- common/autotest_common.sh@874 -- # size=4096 00:12:43.273 06:05:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.273 06:05:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:43.273 06:05:13 -- common/autotest_common.sh@877 -- # return 0 00:12:43.273 06:05:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:43.273 06:05:13 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:43.273 06:05:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:12:43.531 /dev/nbd8 00:12:43.531 06:05:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:12:43.531 06:05:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:12:43.531 06:05:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:12:43.531 06:05:14 -- common/autotest_common.sh@857 -- # local i 00:12:43.531 06:05:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:43.531 06:05:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:43.531 06:05:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:12:43.531 06:05:14 -- common/autotest_common.sh@861 -- # break 00:12:43.531 06:05:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:43.531 06:05:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:43.531 06:05:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.531 1+0 records in 00:12:43.531 1+0 records out 00:12:43.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00143904 s, 2.8 MB/s 00:12:43.531 06:05:14 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.531 06:05:14 -- common/autotest_common.sh@874 -- # size=4096 00:12:43.531 06:05:14 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.531 06:05:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:43.531 06:05:14 -- common/autotest_common.sh@877 -- # return 0 00:12:43.531 06:05:14 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:43.531 06:05:14 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:43.531 06:05:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:12:43.790 /dev/nbd9 00:12:43.790 06:05:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:12:43.790 06:05:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:12:43.790 06:05:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:12:43.790 06:05:14 -- common/autotest_common.sh@857 -- # local i 00:12:43.790 06:05:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:43.790 06:05:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:43.790 06:05:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 00:12:43.790 06:05:14 -- common/autotest_common.sh@861 -- # break 00:12:43.790 06:05:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:43.790 06:05:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:43.790 06:05:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.790 1+0 records in 00:12:43.790 1+0 records out 00:12:43.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00223163 s, 1.8 MB/s 00:12:43.790 06:05:14 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.790 06:05:14 -- common/autotest_common.sh@874 -- # size=4096 00:12:43.790 06:05:14 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.790 06:05:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:43.790 06:05:14 -- common/autotest_common.sh@877 -- # return 0 00:12:43.790 06:05:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:43.790 06:05:14 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:43.790 06:05:14 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:43.790 06:05:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:43.790 06:05:14 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:44.370 06:05:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd0", 00:12:44.370 "bdev_name": "Malloc0" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd1", 00:12:44.370 "bdev_name": "Malloc1p0" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd10", 00:12:44.370 "bdev_name": "Malloc1p1" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd11", 00:12:44.370 "bdev_name": "Malloc2p0" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd12", 00:12:44.370 "bdev_name": "Malloc2p1" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd13", 00:12:44.370 "bdev_name": "Malloc2p2" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd14", 00:12:44.370 "bdev_name": "Malloc2p3" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd15", 00:12:44.370 "bdev_name": "Malloc2p4" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd2", 00:12:44.370 "bdev_name": "Malloc2p5" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd3", 00:12:44.370 "bdev_name": "Malloc2p6" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd4", 00:12:44.370 "bdev_name": "Malloc2p7" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd5", 00:12:44.370 "bdev_name": 
"TestPT" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd6", 00:12:44.370 "bdev_name": "raid0" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd7", 00:12:44.370 "bdev_name": "concat0" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd8", 00:12:44.370 "bdev_name": "raid1" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd9", 00:12:44.370 "bdev_name": "AIO0" 00:12:44.370 } 00:12:44.370 ]' 00:12:44.370 06:05:14 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd0", 00:12:44.370 "bdev_name": "Malloc0" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd1", 00:12:44.370 "bdev_name": "Malloc1p0" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd10", 00:12:44.370 "bdev_name": "Malloc1p1" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd11", 00:12:44.370 "bdev_name": "Malloc2p0" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd12", 00:12:44.370 "bdev_name": "Malloc2p1" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd13", 00:12:44.370 "bdev_name": "Malloc2p2" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd14", 00:12:44.370 "bdev_name": "Malloc2p3" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd15", 00:12:44.370 "bdev_name": "Malloc2p4" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd2", 00:12:44.370 "bdev_name": "Malloc2p5" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd3", 00:12:44.370 "bdev_name": "Malloc2p6" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd4", 00:12:44.370 "bdev_name": "Malloc2p7" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd5", 00:12:44.370 "bdev_name": "TestPT" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd6", 00:12:44.370 "bdev_name": "raid0" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd7", 00:12:44.370 "bdev_name": "concat0" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd8", 00:12:44.370 "bdev_name": "raid1" 00:12:44.370 }, 00:12:44.370 { 00:12:44.370 "nbd_device": "/dev/nbd9", 00:12:44.370 "bdev_name": "AIO0" 00:12:44.370 } 00:12:44.370 ]' 00:12:44.370 06:05:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:44.370 06:05:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:44.370 /dev/nbd1 00:12:44.370 /dev/nbd10 00:12:44.370 /dev/nbd11 00:12:44.370 /dev/nbd12 00:12:44.370 /dev/nbd13 00:12:44.370 /dev/nbd14 00:12:44.370 /dev/nbd15 00:12:44.370 /dev/nbd2 00:12:44.370 /dev/nbd3 00:12:44.370 /dev/nbd4 00:12:44.370 /dev/nbd5 00:12:44.370 /dev/nbd6 00:12:44.370 /dev/nbd7 00:12:44.370 /dev/nbd8 00:12:44.370 /dev/nbd9' 00:12:44.370 06:05:14 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:44.370 06:05:14 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:44.370 /dev/nbd1 00:12:44.370 /dev/nbd10 00:12:44.370 /dev/nbd11 00:12:44.370 /dev/nbd12 00:12:44.370 /dev/nbd13 00:12:44.370 /dev/nbd14 00:12:44.370 /dev/nbd15 00:12:44.370 /dev/nbd2 00:12:44.370 /dev/nbd3 00:12:44.370 /dev/nbd4 00:12:44.370 /dev/nbd5 00:12:44.370 /dev/nbd6 00:12:44.370 /dev/nbd7 00:12:44.370 /dev/nbd8 00:12:44.370 /dev/nbd9' 00:12:44.370 06:05:14 -- bdev/nbd_common.sh@65 -- # count=16 00:12:44.370 06:05:14 -- bdev/nbd_common.sh@66 -- # echo 16 00:12:44.370 06:05:14 -- bdev/nbd_common.sh@95 -- # count=16 00:12:44.370 06:05:14 -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:12:44.370 06:05:14 -- 
bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:12:44.371 06:05:14 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:44.371 06:05:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:44.371 06:05:14 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:44.371 06:05:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:44.371 06:05:14 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:44.371 06:05:14 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:44.371 256+0 records in 00:12:44.371 256+0 records out 00:12:44.371 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106515 s, 98.4 MB/s 00:12:44.371 06:05:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:44.371 06:05:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:44.371 256+0 records in 00:12:44.371 256+0 records out 00:12:44.371 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153009 s, 6.9 MB/s 00:12:44.371 06:05:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:44.371 06:05:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:44.630 256+0 records in 00:12:44.630 256+0 records out 00:12:44.630 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156863 s, 6.7 MB/s 00:12:44.630 06:05:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:44.630 06:05:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:44.630 256+0 records in 00:12:44.630 256+0 records out 00:12:44.630 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155554 s, 6.7 MB/s 00:12:44.630 06:05:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:44.630 06:05:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:44.889 256+0 records in 00:12:44.889 256+0 records out 00:12:44.889 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155666 s, 6.7 MB/s 00:12:44.889 06:05:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:44.889 06:05:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:45.147 256+0 records in 00:12:45.147 256+0 records out 00:12:45.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154821 s, 6.8 MB/s 00:12:45.147 06:05:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:45.147 06:05:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:45.147 256+0 records in 00:12:45.147 256+0 records out 00:12:45.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154942 s, 6.8 MB/s 00:12:45.147 06:05:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:45.147 06:05:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:12:45.407 256+0 records 
in 00:12:45.407 256+0 records out 00:12:45.407 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1043 s, 10.1 MB/s 00:12:45.407 06:05:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:45.407 06:05:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:12:45.407 256+0 records in 00:12:45.407 256+0 records out 00:12:45.407 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154834 s, 6.8 MB/s 00:12:45.407 06:05:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:45.407 06:05:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:12:45.666 256+0 records in 00:12:45.666 256+0 records out 00:12:45.666 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156607 s, 6.7 MB/s 00:12:45.666 06:05:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:45.666 06:05:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:12:45.925 256+0 records in 00:12:45.925 256+0 records out 00:12:45.925 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155695 s, 6.7 MB/s 00:12:45.925 06:05:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:45.925 06:05:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:12:45.925 256+0 records in 00:12:45.925 256+0 records out 00:12:45.925 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155991 s, 6.7 MB/s 00:12:45.925 06:05:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:45.925 06:05:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:12:46.184 256+0 records in 00:12:46.184 256+0 records out 00:12:46.184 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.155775 s, 6.7 MB/s 00:12:46.184 06:05:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:46.184 06:05:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:12:46.184 256+0 records in 00:12:46.184 256+0 records out 00:12:46.184 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154835 s, 6.8 MB/s 00:12:46.184 06:05:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:46.184 06:05:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:12:46.443 256+0 records in 00:12:46.443 256+0 records out 00:12:46.443 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156807 s, 6.7 MB/s 00:12:46.443 06:05:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:46.443 06:05:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:12:46.702 256+0 records in 00:12:46.703 256+0 records out 00:12:46.703 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160871 s, 6.5 MB/s 00:12:46.703 06:05:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:46.703 06:05:17 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:12:46.962 256+0 records in 00:12:46.962 256+0 records out 00:12:46.962 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.22777 s, 4.6 MB/s 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 
/dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@83 
-- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@51 -- # local i 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:46.962 06:05:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:47.221 06:05:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:47.221 06:05:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:47.221 06:05:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:47.221 06:05:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:47.222 06:05:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:47.222 06:05:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:47.222 06:05:17 -- bdev/nbd_common.sh@41 -- # break 00:12:47.222 06:05:17 -- bdev/nbd_common.sh@45 -- # return 0 00:12:47.222 06:05:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:47.222 06:05:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:47.480 06:05:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:47.480 06:05:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:47.480 06:05:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:47.480 06:05:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:47.481 06:05:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:47.481 06:05:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:47.739 06:05:18 -- bdev/nbd_common.sh@41 -- # break 00:12:47.739 06:05:18 -- bdev/nbd_common.sh@45 -- # return 0 00:12:47.739 06:05:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:47.739 06:05:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:47.739 06:05:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:47.740 06:05:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:47.740 06:05:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:47.740 06:05:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:47.740 06:05:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:47.740 06:05:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:47.740 
06:05:18 -- bdev/nbd_common.sh@41 -- # break 00:12:47.740 06:05:18 -- bdev/nbd_common.sh@45 -- # return 0 00:12:47.740 06:05:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:47.740 06:05:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:47.998 06:05:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:47.999 06:05:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:47.999 06:05:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:47.999 06:05:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:47.999 06:05:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:47.999 06:05:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:47.999 06:05:18 -- bdev/nbd_common.sh@41 -- # break 00:12:47.999 06:05:18 -- bdev/nbd_common.sh@45 -- # return 0 00:12:47.999 06:05:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:47.999 06:05:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:48.258 06:05:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:48.258 06:05:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:48.258 06:05:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:48.258 06:05:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:48.258 06:05:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:48.258 06:05:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:48.258 06:05:18 -- bdev/nbd_common.sh@41 -- # break 00:12:48.258 06:05:18 -- bdev/nbd_common.sh@45 -- # return 0 00:12:48.258 06:05:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.258 06:05:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:48.517 06:05:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:48.517 06:05:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:48.517 06:05:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:48.517 06:05:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:48.517 06:05:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:48.517 06:05:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:48.517 06:05:19 -- bdev/nbd_common.sh@41 -- # break 00:12:48.517 06:05:19 -- bdev/nbd_common.sh@45 -- # return 0 00:12:48.517 06:05:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.517 06:05:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:48.776 06:05:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:48.776 06:05:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:48.776 06:05:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:48.776 06:05:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:48.776 06:05:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:48.776 06:05:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:48.776 06:05:19 -- bdev/nbd_common.sh@41 -- # break 00:12:48.776 06:05:19 -- bdev/nbd_common.sh@45 -- # return 0 00:12:48.776 06:05:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.776 06:05:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:49.035 06:05:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:49.035 06:05:19 -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:49.035 06:05:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:49.035 06:05:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.035 06:05:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.035 06:05:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:49.035 06:05:19 -- bdev/nbd_common.sh@41 -- # break 00:12:49.035 06:05:19 -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.035 06:05:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.035 06:05:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:49.035 06:05:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:49.035 06:05:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:49.035 06:05:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:49.035 06:05:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.035 06:05:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.035 06:05:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:49.035 06:05:19 -- bdev/nbd_common.sh@41 -- # break 00:12:49.035 06:05:19 -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.035 06:05:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.035 06:05:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:49.294 06:05:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:49.294 06:05:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:49.294 06:05:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:49.294 06:05:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.294 06:05:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.294 06:05:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:49.294 06:05:19 -- bdev/nbd_common.sh@41 -- # break 00:12:49.294 06:05:19 -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.294 06:05:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.294 06:05:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:49.553 06:05:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:49.553 06:05:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:49.553 06:05:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:49.553 06:05:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.553 06:05:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.553 06:05:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:49.553 06:05:20 -- bdev/nbd_common.sh@41 -- # break 00:12:49.553 06:05:20 -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.553 06:05:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.553 06:05:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:49.811 06:05:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:49.811 06:05:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:49.811 06:05:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:49.811 06:05:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.811 06:05:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.811 06:05:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:49.811 06:05:20 -- bdev/nbd_common.sh@41 -- # break 00:12:49.811 06:05:20 -- bdev/nbd_common.sh@45 -- # return 0 
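(The stop/wait pattern repeating above for every nbd device is compact enough to reconstruct from the trace: nbd_stop_disk is issued over the RPC socket, then waitfornbd_exit polls /proc/partitions until the kernel drops the nbd name. A minimal bash sketch, reconstructed from the xtrace line numbers @35-@45 shown above — the 0.1 s sleep between retries is an assumption, since in this run grep fails on the first iteration and the trace never reaches that branch:

    waitfornbd_exit() {
        local nbd_name=$1
        # Poll up to 20 times for the nbd entry to disappear from /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1   # assumed back-off; not exercised in the trace above
            else
                break       # device is gone -- matches the '@41 break' lines
            fi
        done
        return 0
    }

Each "@41 break / @45 return 0" pair in the log is one invocation of this helper succeeding on its first probe.)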
00:12:49.811 06:05:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.811 06:05:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:50.069 06:05:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:50.069 06:05:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:50.069 06:05:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:50.069 06:05:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.069 06:05:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.069 06:05:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:50.069 06:05:20 -- bdev/nbd_common.sh@41 -- # break 00:12:50.069 06:05:20 -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.069 06:05:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.069 06:05:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:50.069 06:05:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:50.069 06:05:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:50.069 06:05:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:50.069 06:05:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.069 06:05:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.070 06:05:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:50.070 06:05:20 -- bdev/nbd_common.sh@41 -- # break 00:12:50.070 06:05:20 -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.070 06:05:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.070 06:05:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:50.328 06:05:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:50.328 06:05:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:50.328 06:05:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:50.328 06:05:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.328 06:05:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.328 06:05:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:50.328 06:05:20 -- bdev/nbd_common.sh@41 -- # break 00:12:50.328 06:05:20 -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.328 06:05:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.328 06:05:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:50.587 06:05:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:50.587 06:05:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:50.587 06:05:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:50.587 06:05:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.587 06:05:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.587 06:05:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:50.587 06:05:21 -- bdev/nbd_common.sh@41 -- # break 00:12:50.587 06:05:21 -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.587 06:05:21 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:50.587 06:05:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:50.587 06:05:21 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:50.846 06:05:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:50.846 06:05:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:12:50.846 06:05:21 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:51.136 06:05:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:51.136 06:05:21 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:51.136 06:05:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:51.136 06:05:21 -- bdev/nbd_common.sh@65 -- # true 00:12:51.136 06:05:21 -- bdev/nbd_common.sh@65 -- # count=0 00:12:51.136 06:05:21 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:51.136 06:05:21 -- bdev/nbd_common.sh@104 -- # count=0 00:12:51.136 06:05:21 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:51.136 06:05:21 -- bdev/nbd_common.sh@109 -- # return 0 00:12:51.136 06:05:21 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:51.136 06:05:21 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:51.136 06:05:21 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:51.136 06:05:21 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:12:51.136 06:05:21 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:12:51.136 06:05:21 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:51.136 malloc_lvol_verify 00:12:51.136 06:05:21 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:51.395 4028d103-dad3-42c8-b74b-fe0f4468019b 00:12:51.395 06:05:21 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:51.653 24442810-ca40-4244-ae17-bde8ec990273 00:12:51.653 06:05:22 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:51.912 /dev/nbd0 00:12:51.912 06:05:22 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:12:51.912 mke2fs 1.46.5 (30-Dec-2021) 00:12:51.912 00:12:51.912 Filesystem too small for a journal 00:12:51.912 Discarding device blocks: 0/1024 done 00:12:51.912 Creating filesystem with 1024 4k blocks and 1024 inodes 00:12:51.912 00:12:51.912 Allocating group tables: 0/1 done 00:12:51.912 Writing inode tables: 0/1 done 00:12:51.912 Writing superblocks and filesystem accounting information: 0/1 done 00:12:51.912 00:12:51.912 06:05:22 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:12:51.912 06:05:22 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:51.912 06:05:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:51.912 06:05:22 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:51.912 06:05:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:51.912 06:05:22 -- bdev/nbd_common.sh@51 -- # local i 00:12:51.912 06:05:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:51.912 06:05:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:52.170 06:05:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:52.170 06:05:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:52.170 06:05:22 -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:52.170 06:05:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.170 06:05:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.170 06:05:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:52.170 06:05:22 -- bdev/nbd_common.sh@41 -- # break 00:12:52.170 06:05:22 -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.170 06:05:22 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:52.170 06:05:22 -- bdev/nbd_common.sh@147 -- # return 0 00:12:52.170 06:05:22 -- bdev/blockdev.sh@324 -- # killprocess 109365 00:12:52.170 06:05:22 -- common/autotest_common.sh@926 -- # '[' -z 109365 ']' 00:12:52.170 06:05:22 -- common/autotest_common.sh@930 -- # kill -0 109365 00:12:52.170 06:05:22 -- common/autotest_common.sh@931 -- # uname 00:12:52.170 06:05:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:52.170 06:05:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 109365 00:12:52.170 06:05:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:52.170 06:05:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:52.170 06:05:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 109365' 00:12:52.170 killing process with pid 109365 00:12:52.170 06:05:22 -- common/autotest_common.sh@945 -- # kill 109365 00:12:52.170 06:05:22 -- common/autotest_common.sh@950 -- # wait 109365 00:12:54.703 ************************************ 00:12:54.703 END TEST bdev_nbd 00:12:54.703 ************************************ 00:12:54.703 06:05:25 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:12:54.703 00:12:54.703 real 0m26.757s 00:12:54.703 user 0m33.703s 00:12:54.703 sys 0m11.795s 00:12:54.703 06:05:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:54.703 06:05:25 -- common/autotest_common.sh@10 -- # set +x 00:12:54.703 06:05:25 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:12:54.703 06:05:25 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:12:54.703 06:05:25 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:12:54.703 06:05:25 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:12:54.703 06:05:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:54.703 06:05:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:54.703 06:05:25 -- common/autotest_common.sh@10 -- # set +x 00:12:54.703 ************************************ 00:12:54.703 START TEST bdev_fio 00:12:54.703 ************************************ 00:12:54.703 06:05:25 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:12:54.703 06:05:25 -- bdev/blockdev.sh@329 -- # local env_context 00:12:54.703 06:05:25 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:54.703 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:54.703 06:05:25 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:54.703 06:05:25 -- bdev/blockdev.sh@337 -- # echo '' 00:12:54.703 06:05:25 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:12:54.703 06:05:25 -- bdev/blockdev.sh@337 -- # env_context= 00:12:54.703 06:05:25 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:54.703 06:05:25 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:54.703 06:05:25 -- common/autotest_common.sh@1260 -- # local workload=verify 00:12:54.703 06:05:25 -- common/autotest_common.sh@1261 -- # 
local bdev_type=AIO 00:12:54.703 06:05:25 -- common/autotest_common.sh@1262 -- # local env_context= 00:12:54.703 06:05:25 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:12:54.703 06:05:25 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:54.703 06:05:25 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:12:54.703 06:05:25 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:12:54.703 06:05:25 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:54.703 06:05:25 -- common/autotest_common.sh@1280 -- # cat 00:12:54.703 06:05:25 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:12:54.703 06:05:25 -- common/autotest_common.sh@1293 -- # cat 00:12:54.703 06:05:25 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:12:54.703 06:05:25 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:12:54.962 06:05:25 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:54.962 06:05:25 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:12:54.962 06:05:25 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:54.962 06:05:25 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:12:54.962 06:05:25 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:12:54.962 06:05:25 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:54.962 06:05:25 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:12:54.962 06:05:25 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:12:54.962 06:05:25 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:54.962 06:05:25 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:12:54.962 06:05:25 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:12:54.962 06:05:25 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:54.962 06:05:25 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:12:54.962 06:05:25 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:12:54.962 06:05:25 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:54.962 06:05:25 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:12:54.962 06:05:25 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:12:54.962 06:05:25 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:54.962 06:05:25 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:12:54.962 06:05:25 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2 00:12:54.962 06:05:25 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:54.962 06:05:25 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:12:54.962 06:05:25 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:12:54.962 06:05:25 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:54.962 06:05:25 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:12:54.962 06:05:25 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4 00:12:54.962 06:05:25 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:54.962 06:05:25 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:12:54.962 06:05:25 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:12:54.962 06:05:25 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:54.962 06:05:25 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:12:54.962 06:05:25 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:12:54.962 06:05:25 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:54.962 06:05:25 -- bdev/blockdev.sh@340 -- # echo 
'[job_Malloc2p7]' 00:12:54.962 06:05:25 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:12:54.962 06:05:25 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:54.962 06:05:25 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:12:54.962 06:05:25 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:12:54.962 06:05:25 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:54.962 06:05:25 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:12:54.962 06:05:25 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:12:54.962 06:05:25 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:54.962 06:05:25 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:12:54.962 06:05:25 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:12:54.962 06:05:25 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:54.962 06:05:25 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:12:54.962 06:05:25 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:12:54.962 06:05:25 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:54.962 06:05:25 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:12:54.962 06:05:25 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:12:54.962 06:05:25 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:54.962 06:05:25 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:54.962 06:05:25 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:12:54.962 06:05:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:54.962 06:05:25 -- common/autotest_common.sh@10 -- # set +x 00:12:54.962 ************************************ 00:12:54.962 START TEST bdev_fio_rw_verify 00:12:54.962 ************************************ 00:12:54.962 06:05:25 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:54.962 06:05:25 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:54.962 06:05:25 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:12:54.962 06:05:25 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:54.963 06:05:25 -- common/autotest_common.sh@1318 -- # local sanitizers 00:12:54.963 06:05:25 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:54.963 06:05:25 -- common/autotest_common.sh@1320 -- # shift 00:12:54.963 06:05:25 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:12:54.963 06:05:25 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:12:54.963 06:05:25 -- common/autotest_common.sh@1324 -- # 
ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:54.963 06:05:25 -- common/autotest_common.sh@1324 -- # grep libasan 00:12:54.963 06:05:25 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:12:54.963 06:05:25 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:12:54.963 06:05:25 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:12:54.963 06:05:25 -- common/autotest_common.sh@1326 -- # break 00:12:54.963 06:05:25 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:54.963 06:05:25 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:55.222 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.222 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.222 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.222 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.222 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.222 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.222 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.222 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.222 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.222 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.222 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.222 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.222 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.222 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.222 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.222 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:55.222 fio-3.35 00:12:55.222 Starting 16 threads 00:13:07.422 00:13:07.422 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=110556: Tue Jun 11 06:05:37 2024 00:13:07.422 read: IOPS=85.5k, BW=334MiB/s (350MB/s)(3340MiB/10004msec) 00:13:07.422 slat (nsec): min=1823, max=43984k, avg=31403.34, stdev=393994.50 00:13:07.422 clat (usec): min=7, max=47992, avg=269.96, stdev=1208.13 00:13:07.422 lat (usec): min=18, max=48010, avg=301.36, stdev=1270.18 00:13:07.422 clat 
percentiles (usec): 00:13:07.422 | 50.000th=[ 159], 99.000th=[ 750], 99.900th=[16319], 99.990th=[28181], 00:13:07.422 | 99.999th=[44303] 00:13:07.422 write: IOPS=135k, BW=526MiB/s (551MB/s)(5196MiB/9885msec); 0 zone resets 00:13:07.422 slat (usec): min=5, max=47944, avg=60.13, stdev=617.70 00:13:07.422 clat (usec): min=8, max=54224, avg=352.39, stdev=1455.34 00:13:07.422 lat (usec): min=34, max=54258, avg=412.52, stdev=1581.22 00:13:07.422 clat percentiles (usec): 00:13:07.422 | 50.000th=[ 198], 99.000th=[ 4490], 99.900th=[20055], 99.990th=[32113], 00:13:07.422 | 99.999th=[47973] 00:13:07.422 bw ( KiB/s): min=325416, max=828728, per=98.50%, avg=530138.47, stdev=8829.69, samples=304 00:13:07.422 iops : min=81354, max=207182, avg=132534.79, stdev=2207.43, samples=304 00:13:07.422 lat (usec) : 10=0.01%, 20=0.01%, 50=1.04%, 100=15.10%, 250=59.93% 00:13:07.422 lat (usec) : 500=20.26%, 750=2.15%, 1000=0.32% 00:13:07.422 lat (msec) : 2=0.14%, 4=0.12%, 10=0.26%, 20=0.57%, 50=0.08% 00:13:07.422 lat (msec) : 100=0.01% 00:13:07.422 cpu : usr=55.37%, sys=2.18%, ctx=292304, majf=2, minf=92199 00:13:07.422 IO depths : 1=11.0%, 2=23.4%, 4=52.3%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:07.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.422 complete : 0=0.0%, 4=88.8%, 8=11.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.422 issued rwts: total=855043,1330089,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:07.422 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:07.422 00:13:07.422 Run status group 0 (all jobs): 00:13:07.422 READ: bw=334MiB/s (350MB/s), 334MiB/s-334MiB/s (350MB/s-350MB/s), io=3340MiB (3502MB), run=10004-10004msec 00:13:07.422 WRITE: bw=526MiB/s (551MB/s), 526MiB/s-526MiB/s (551MB/s-551MB/s), io=5196MiB (5448MB), run=9885-9885msec 00:13:09.962 ----------------------------------------------------- 00:13:09.962 Suppressions used: 00:13:09.962 count bytes template 00:13:09.962 16 140 /usr/src/fio/parse.c 00:13:09.962 10207 979872 /usr/src/fio/iolog.c 00:13:09.962 1 904 libcrypto.so 00:13:09.962 ----------------------------------------------------- 00:13:09.962 00:13:09.962 00:13:09.962 real 0m14.852s 00:13:09.962 user 1m35.152s 00:13:09.962 sys 0m4.554s 00:13:09.962 06:05:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:09.962 ************************************ 00:13:09.962 END TEST bdev_fio_rw_verify 00:13:09.962 ************************************ 00:13:09.962 06:05:40 -- common/autotest_common.sh@10 -- # set +x 00:13:09.962 06:05:40 -- bdev/blockdev.sh@348 -- # rm -f 00:13:09.962 06:05:40 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:09.962 06:05:40 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:09.962 06:05:40 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:09.962 06:05:40 -- common/autotest_common.sh@1260 -- # local workload=trim 00:13:09.962 06:05:40 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:13:09.962 06:05:40 -- common/autotest_common.sh@1262 -- # local env_context= 00:13:09.962 06:05:40 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:13:09.962 06:05:40 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:09.962 06:05:40 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:13:09.962 06:05:40 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:13:09.962 06:05:40 -- 
common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:09.962 06:05:40 -- common/autotest_common.sh@1280 -- # cat 00:13:09.962 06:05:40 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:13:09.962 06:05:40 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:13:09.962 06:05:40 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:13:09.962 06:05:40 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:09.963 06:05:40 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "3faa221f-ff43-4e73-ac35-693f2d58b974"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3faa221f-ff43-4e73-ac35-693f2d58b974",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "62487afa-6a6e-5b26-b001-e361a0a211cd"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "62487afa-6a6e-5b26-b001-e361a0a211cd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "b7d14b28-8336-5221-9f98-4b2177966fef"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "b7d14b28-8336-5221-9f98-4b2177966fef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "df6b0705-d8bf-586e-8ba3-d697629d7457"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "df6b0705-d8bf-586e-8ba3-d697629d7457",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' 
},' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "0f74f7c8-477a-56da-97bc-0b63536e343a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0f74f7c8-477a-56da-97bc-0b63536e343a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "e2e0af42-b4cd-5cc5-84a2-dc50ea7fc4f2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e2e0af42-b4cd-5cc5-84a2-dc50ea7fc4f2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "52b756d4-7c55-5061-9ab9-5fa4cd528512"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "52b756d4-7c55-5061-9ab9-5fa4cd528512",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "a386dd92-3e0a-554a-891a-67bb6c61466b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a386dd92-3e0a-554a-891a-67bb6c61466b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "f376bbc3-a869-589d-9d90-10ad9fad7683"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f376bbc3-a869-589d-9d90-10ad9fad7683",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": 
false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "a2fb79fd-2cd0-55e2-a40b-ae9515452241"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a2fb79fd-2cd0-55e2-a40b-ae9515452241",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "dd5763d4-84f6-551e-b165-9c74f7134cb6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dd5763d4-84f6-551e-b165-9c74f7134cb6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "b7c8f4b7-2765-5b09-a737-bfad49ed4c15"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b7c8f4b7-2765-5b09-a737-bfad49ed4c15",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "bd112e3d-8639-4b74-85df-a456d10c04f8"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "bd112e3d-8639-4b74-85df-a456d10c04f8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "bd112e3d-8639-4b74-85df-a456d10c04f8",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "69370a72-131e-4f8f-9d8b-c8226a7f9d8c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "85ac982e-eac3-4cca-a522-ce08fbf7e183",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "70254839-f488-414c-85d9-7e9fc9c5c699"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "70254839-f488-414c-85d9-7e9fc9c5c699",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "70254839-f488-414c-85d9-7e9fc9c5c699",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "2b90cf78-8401-438d-bdcc-603601cca30e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "d071178d-ad4e-4a6a-a5df-6da8abd53a9f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "914e0fbb-862e-4918-b383-93c80ee74f32"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "914e0fbb-862e-4918-b383-93c80ee74f32",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "914e0fbb-862e-4918-b383-93c80ee74f32",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "af09a647-3e16-4dd6-9431-bb02731d73db",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": 
"c98a4a12-b5a8-495f-bada-079a8c278bc5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "4faf661e-3bae-41a2-bb11-f50d22355afd"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "4faf661e-3bae-41a2-bb11-f50d22355afd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:13:09.963 06:05:40 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:13:09.963 Malloc1p0 00:13:09.963 Malloc1p1 00:13:09.963 Malloc2p0 00:13:09.963 Malloc2p1 00:13:09.963 Malloc2p2 00:13:09.963 Malloc2p3 00:13:09.963 Malloc2p4 00:13:09.963 Malloc2p5 00:13:09.963 Malloc2p6 00:13:09.963 Malloc2p7 00:13:09.963 TestPT 00:13:09.963 raid0 00:13:09.963 concat0 ]] 00:13:09.963 06:05:40 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:09.964 06:05:40 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "3faa221f-ff43-4e73-ac35-693f2d58b974"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3faa221f-ff43-4e73-ac35-693f2d58b974",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "62487afa-6a6e-5b26-b001-e361a0a211cd"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "62487afa-6a6e-5b26-b001-e361a0a211cd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "b7d14b28-8336-5221-9f98-4b2177966fef"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "b7d14b28-8336-5221-9f98-4b2177966fef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' 
' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "df6b0705-d8bf-586e-8ba3-d697629d7457"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "df6b0705-d8bf-586e-8ba3-d697629d7457",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "0f74f7c8-477a-56da-97bc-0b63536e343a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0f74f7c8-477a-56da-97bc-0b63536e343a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "e2e0af42-b4cd-5cc5-84a2-dc50ea7fc4f2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e2e0af42-b4cd-5cc5-84a2-dc50ea7fc4f2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "52b756d4-7c55-5061-9ab9-5fa4cd528512"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "52b756d4-7c55-5061-9ab9-5fa4cd528512",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "a386dd92-3e0a-554a-891a-67bb6c61466b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a386dd92-3e0a-554a-891a-67bb6c61466b",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "f376bbc3-a869-589d-9d90-10ad9fad7683"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f376bbc3-a869-589d-9d90-10ad9fad7683",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "a2fb79fd-2cd0-55e2-a40b-ae9515452241"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a2fb79fd-2cd0-55e2-a40b-ae9515452241",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "dd5763d4-84f6-551e-b165-9c74f7134cb6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "dd5763d4-84f6-551e-b165-9c74f7134cb6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "b7c8f4b7-2765-5b09-a737-bfad49ed4c15"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b7c8f4b7-2765-5b09-a737-bfad49ed4c15",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "bd112e3d-8639-4b74-85df-a456d10c04f8"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "bd112e3d-8639-4b74-85df-a456d10c04f8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "bd112e3d-8639-4b74-85df-a456d10c04f8",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "69370a72-131e-4f8f-9d8b-c8226a7f9d8c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "85ac982e-eac3-4cca-a522-ce08fbf7e183",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "70254839-f488-414c-85d9-7e9fc9c5c699"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "70254839-f488-414c-85d9-7e9fc9c5c699",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "70254839-f488-414c-85d9-7e9fc9c5c699",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "2b90cf78-8401-438d-bdcc-603601cca30e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "d071178d-ad4e-4a6a-a5df-6da8abd53a9f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "914e0fbb-862e-4918-b383-93c80ee74f32"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "914e0fbb-862e-4918-b383-93c80ee74f32",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": 
true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "914e0fbb-862e-4918-b383-93c80ee74f32",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "af09a647-3e16-4dd6-9431-bb02731d73db",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "c98a4a12-b5a8-495f-bada-079a8c278bc5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "4faf661e-3bae-41a2-bb11-f50d22355afd"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "4faf661e-3bae-41a2-bb11-f50d22355afd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:13:09.964 06:05:40 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:09.964 06:05:40 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:13:09.964 06:05:40 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:13:09.964 06:05:40 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:09.964 06:05:40 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:13:09.964 06:05:40 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:13:09.964 06:05:40 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:09.964 06:05:40 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:13:09.964 06:05:40 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:13:09.964 06:05:40 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:09.964 06:05:40 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:13:09.964 06:05:40 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:13:09.964 06:05:40 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:09.964 06:05:40 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:13:09.964 06:05:40 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:13:09.964 06:05:40 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:09.964 06:05:40 -- 
bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:13:09.964 06:05:40 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p2 00:13:09.964 06:05:40 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:09.964 06:05:40 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:13:09.964 06:05:40 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:13:09.964 06:05:40 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:09.964 06:05:40 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:13:09.964 06:05:40 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:13:09.964 06:05:40 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:09.964 06:05:40 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:13:09.964 06:05:40 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:13:09.964 06:05:40 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:09.964 06:05:40 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:13:09.964 06:05:40 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:13:09.964 06:05:40 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:09.964 06:05:40 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:13:09.964 06:05:40 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:13:09.965 06:05:40 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:09.965 06:05:40 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:13:09.965 06:05:40 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:13:09.965 06:05:40 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:09.965 06:05:40 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:13:09.965 06:05:40 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:13:09.965 06:05:40 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:09.965 06:05:40 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:13:09.965 06:05:40 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:13:09.965 06:05:40 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:09.965 06:05:40 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:13:09.965 06:05:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:09.965 06:05:40 -- common/autotest_common.sh@10 -- # set +x 00:13:09.965 ************************************ 00:13:09.965 START TEST bdev_fio_trim 00:13:09.965 ************************************ 00:13:09.965 06:05:40 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:09.965 
06:05:40 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:09.965 06:05:40 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:13:09.965 06:05:40 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:09.965 06:05:40 -- common/autotest_common.sh@1318 -- # local sanitizers 00:13:09.965 06:05:40 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:09.965 06:05:40 -- common/autotest_common.sh@1320 -- # shift 00:13:09.965 06:05:40 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:13:09.965 06:05:40 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:13:09.965 06:05:40 -- common/autotest_common.sh@1324 -- # grep libasan 00:13:09.965 06:05:40 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:09.965 06:05:40 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:13:09.965 06:05:40 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:13:09.965 06:05:40 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:13:09.965 06:05:40 -- common/autotest_common.sh@1326 -- # break 00:13:09.965 06:05:40 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:09.965 06:05:40 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:10.225 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.225 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.225 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.225 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.225 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.225 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.225 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.225 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.225 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.225 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.225 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.225 job_TestPT: (g=0): rw=trimwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.225 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.225 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:10.225 fio-3.35 00:13:10.225 Starting 14 threads 00:13:22.456 00:13:22.456 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=110790: Tue Jun 11 06:05:52 2024 00:13:22.456 write: IOPS=171k, BW=667MiB/s (699MB/s)(6673MiB/10011msec); 0 zone resets 00:13:22.456 slat (usec): min=2, max=24058, avg=27.30, stdev=331.61 00:13:22.456 clat (usec): min=13, max=37439, avg=213.89, stdev=985.37 00:13:22.456 lat (usec): min=27, max=37461, avg=241.19, stdev=1038.99 00:13:22.456 clat percentiles (usec): 00:13:22.456 | 50.000th=[ 139], 99.000th=[ 502], 99.900th=[16188], 99.990th=[19006], 00:13:22.456 | 99.999th=[24249] 00:13:22.456 bw ( KiB/s): min=493232, max=966480, per=99.90%, avg=681896.61, stdev=12617.17, samples=267 00:13:22.456 iops : min=123308, max=241622, avg=170474.01, stdev=3154.30, samples=267 00:13:22.456 trim: IOPS=171k, BW=667MiB/s (699MB/s)(6673MiB/10011msec); 0 zone resets 00:13:22.456 slat (usec): min=4, max=24040, avg=21.66, stdev=307.55 00:13:22.456 clat (usec): min=4, max=37461, avg=219.88, stdev=958.83 00:13:22.456 lat (usec): min=13, max=37475, avg=241.54, stdev=1006.64 00:13:22.456 clat percentiles (usec): 00:13:22.456 | 50.000th=[ 155], 99.000th=[ 297], 99.900th=[16188], 99.990th=[17171], 00:13:22.456 | 99.999th=[24249] 00:13:22.456 bw ( KiB/s): min=493168, max=966424, per=99.90%, avg=681896.24, stdev=12617.12, samples=267 00:13:22.456 iops : min=123292, max=241608, avg=170474.11, stdev=3154.30, samples=267 00:13:22.456 lat (usec) : 10=0.11%, 20=0.27%, 50=1.04%, 100=16.30%, 250=78.42% 00:13:22.456 lat (usec) : 500=3.10%, 750=0.27%, 1000=0.01% 00:13:22.456 lat (msec) : 2=0.01%, 4=0.02%, 10=0.06%, 20=0.39%, 50=0.01% 00:13:22.456 cpu : usr=69.08%, sys=0.46%, ctx=173395, majf=0, minf=823 00:13:22.456 IO depths : 1=12.3%, 2=24.7%, 4=50.0%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:22.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.456 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.456 issued rwts: total=0,1708260,1708261,0 short=0,0,0,0 dropped=0,0,0,0 00:13:22.456 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:22.456 00:13:22.456 Run status group 0 (all jobs): 00:13:22.456 WRITE: bw=667MiB/s (699MB/s), 667MiB/s-667MiB/s (699MB/s-699MB/s), io=6673MiB (6997MB), run=10011-10011msec 00:13:22.456 TRIM: bw=667MiB/s (699MB/s), 667MiB/s-667MiB/s (699MB/s-699MB/s), io=6673MiB (6997MB), run=10011-10011msec 00:13:24.360 ----------------------------------------------------- 00:13:24.360 Suppressions used: 00:13:24.360 count bytes template 00:13:24.360 14 129 /usr/src/fio/parse.c 00:13:24.360 1 904 libcrypto.so 00:13:24.360 ----------------------------------------------------- 00:13:24.360 00:13:24.360 00:13:24.360 real 0m14.400s 00:13:24.360 user 1m42.396s 00:13:24.360 sys 0m1.681s 00:13:24.360 06:05:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:24.360 ************************************ 00:13:24.360 END TEST bdev_fio_trim 00:13:24.360 ************************************ 00:13:24.360 06:05:54 -- common/autotest_common.sh@10 -- # set +x 00:13:24.360 06:05:54 -- bdev/blockdev.sh@366 -- # rm -f 00:13:24.360 06:05:54 -- bdev/blockdev.sh@367 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:24.360 /home/vagrant/spdk_repo/spdk 00:13:24.360 06:05:54 -- bdev/blockdev.sh@368 -- # popd 00:13:24.360 06:05:54 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:13:24.360 00:13:24.360 real 0m29.617s 00:13:24.361 user 3m17.768s 00:13:24.361 sys 0m6.355s 00:13:24.361 06:05:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:24.361 06:05:54 -- common/autotest_common.sh@10 -- # set +x 00:13:24.361 ************************************ 00:13:24.361 END TEST bdev_fio 00:13:24.361 ************************************ 00:13:24.361 06:05:54 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:24.361 06:05:54 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:24.361 06:05:54 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:13:24.361 06:05:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:24.361 06:05:54 -- common/autotest_common.sh@10 -- # set +x 00:13:24.361 ************************************ 00:13:24.361 START TEST bdev_verify 00:13:24.361 ************************************ 00:13:24.361 06:05:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:24.620 [2024-06-11 06:05:55.085607] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:13:24.620 [2024-06-11 06:05:55.085826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110984 ] 00:13:24.879 [2024-06-11 06:05:55.278789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:25.138 [2024-06-11 06:05:55.555936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.138 [2024-06-11 06:05:55.555936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.397 [2024-06-11 06:05:55.994940] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:25.397 [2024-06-11 06:05:55.995299] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:25.398 [2024-06-11 06:05:56.002911] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:25.398 [2024-06-11 06:05:56.003091] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:25.398 [2024-06-11 06:05:56.010932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:25.398 [2024-06-11 06:05:56.011074] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:25.398 [2024-06-11 06:05:56.011187] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:25.657 [2024-06-11 06:05:56.227843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:25.657 [2024-06-11 06:05:56.228444] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.657 [2024-06-11 06:05:56.228697] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:13:25.657 [2024-06-11 06:05:56.228859] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.657 [2024-06-11 06:05:56.231687] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.657 [2024-06-11 06:05:56.231827] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:26.224 Running I/O for 5 seconds... 00:13:31.494 00:13:31.494 Latency(us) 00:13:31.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.494 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x0 length 0x1000 00:13:31.494 Malloc0 : 5.16 1704.96 6.66 0.00 0.00 74454.43 2028.50 218702.99 00:13:31.494 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x1000 length 0x1000 00:13:31.494 Malloc0 : 5.16 1679.41 6.56 0.00 0.00 75553.20 1599.39 279620.27 00:13:31.494 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x0 length 0x800 00:13:31.494 Malloc1p0 : 5.16 1182.10 4.62 0.00 0.00 107267.31 4181.82 135815.56 00:13:31.494 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x800 length 0x800 00:13:31.494 Malloc1p0 : 5.16 1182.38 4.62 0.00 0.00 107276.27 4181.82 135815.56 00:13:31.494 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x0 length 0x800 00:13:31.494 Malloc1p1 : 5.16 1181.68 4.62 0.00 0.00 107115.02 4712.35 130822.34 00:13:31.494 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x800 length 0x800 00:13:31.494 Malloc1p1 : 5.16 1181.94 4.62 0.00 0.00 107121.95 4712.35 130822.34 00:13:31.494 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x0 length 0x200 00:13:31.494 Malloc2p0 : 5.17 1181.27 4.61 0.00 0.00 106948.68 4369.07 124830.48 00:13:31.494 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x200 length 0x200 00:13:31.494 Malloc2p0 : 5.16 1181.51 4.62 0.00 0.00 106959.65 4400.27 124830.48 00:13:31.494 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x0 length 0x200 00:13:31.494 Malloc2p1 : 5.17 1180.84 4.61 0.00 0.00 106797.03 4462.69 120336.58 00:13:31.494 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x200 length 0x200 00:13:31.494 Malloc2p1 : 5.17 1181.07 4.61 0.00 0.00 106797.24 4462.69 120336.58 00:13:31.494 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x0 length 0x200 00:13:31.494 Malloc2p2 : 5.17 1180.41 4.61 0.00 0.00 106649.72 3869.74 115842.68 00:13:31.494 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x200 length 0x200 00:13:31.494 Malloc2p2 : 5.17 1180.64 4.61 0.00 0.00 106649.87 3900.95 115842.68 00:13:31.494 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x0 length 0x200 00:13:31.494 Malloc2p3 : 5.17 1180.00 4.61 0.00 0.00 106510.23 3557.67 112347.43 00:13:31.494 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 
00:13:31.494 Verification LBA range: start 0x200 length 0x200 00:13:31.494 Malloc2p3 : 5.17 1180.23 4.61 0.00 0.00 106517.12 3557.67 112347.43 00:13:31.494 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x0 length 0x200 00:13:31.494 Malloc2p4 : 5.19 1191.45 4.65 0.00 0.00 105741.72 3760.52 108352.85 00:13:31.494 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x200 length 0x200 00:13:31.494 Malloc2p4 : 5.17 1179.81 4.61 0.00 0.00 106392.71 3760.52 108352.85 00:13:31.494 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x0 length 0x200 00:13:31.494 Malloc2p5 : 5.19 1190.98 4.65 0.00 0.00 105605.34 4088.20 103858.96 00:13:31.494 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x200 length 0x200 00:13:31.494 Malloc2p5 : 5.19 1191.58 4.65 0.00 0.00 105598.08 4088.20 103359.63 00:13:31.494 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x0 length 0x200 00:13:31.494 Malloc2p6 : 5.19 1190.51 4.65 0.00 0.00 105482.43 3963.37 101362.35 00:13:31.494 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x200 length 0x200 00:13:31.494 Malloc2p6 : 5.19 1191.28 4.65 0.00 0.00 105433.58 3932.16 101861.67 00:13:31.494 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x0 length 0x200 00:13:31.494 Malloc2p7 : 5.19 1190.08 4.65 0.00 0.00 105341.06 3479.65 102360.99 00:13:31.494 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x200 length 0x200 00:13:31.494 Malloc2p7 : 5.19 1190.82 4.65 0.00 0.00 105315.34 3432.84 103359.63 00:13:31.494 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x0 length 0x1000 00:13:31.494 TestPT : 5.20 1175.74 4.59 0.00 0.00 106502.78 9299.87 104358.28 00:13:31.494 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x1000 length 0x1000 00:13:31.494 TestPT : 5.19 1161.05 4.54 0.00 0.00 107873.94 9112.62 154789.79 00:13:31.494 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x0 length 0x2000 00:13:31.494 raid0 : 5.20 1189.13 4.65 0.00 0.00 105041.66 3620.08 105856.24 00:13:31.494 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x2000 length 0x2000 00:13:31.494 raid0 : 5.20 1189.84 4.65 0.00 0.00 105025.14 3620.08 106355.57 00:13:31.494 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x0 length 0x2000 00:13:31.494 concat0 : 5.20 1188.67 4.64 0.00 0.00 104917.46 3713.71 107354.21 00:13:31.494 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x2000 length 0x2000 00:13:31.494 concat0 : 5.20 1189.42 4.65 0.00 0.00 104891.58 3807.33 107853.53 00:13:31.494 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x0 length 0x1000 00:13:31.494 raid1 : 5.20 1188.12 4.64 0.00 0.00 104789.73 
4119.41 108852.18 00:13:31.494 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x1000 length 0x1000 00:13:31.494 raid1 : 5.20 1188.94 4.64 0.00 0.00 104782.34 4119.41 109351.50 00:13:31.494 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x0 length 0x4e2 00:13:31.494 AIO0 : 5.20 1187.36 4.64 0.00 0.00 104617.11 7302.58 109351.50 00:13:31.494 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:31.494 Verification LBA range: start 0x4e2 length 0x4e2 00:13:31.494 AIO0 : 5.20 1187.94 4.64 0.00 0.00 104590.54 7240.17 109850.82 00:13:31.494 =================================================================================================================== 00:13:31.494 Total : 38921.15 152.04 0.00 0.00 103326.42 1599.39 279620.27 00:13:34.022 00:13:34.022 real 0m9.660s 00:13:34.022 user 0m16.532s 00:13:34.022 sys 0m0.722s 00:13:34.022 06:06:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:34.022 06:06:04 -- common/autotest_common.sh@10 -- # set +x 00:13:34.022 ************************************ 00:13:34.022 END TEST bdev_verify 00:13:34.022 ************************************ 00:13:34.280 06:06:04 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:34.280 06:06:04 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:13:34.280 06:06:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:34.280 06:06:04 -- common/autotest_common.sh@10 -- # set +x 00:13:34.280 ************************************ 00:13:34.280 START TEST bdev_verify_big_io 00:13:34.280 ************************************ 00:13:34.280 06:06:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:34.280 [2024-06-11 06:06:04.801052] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
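Between the two verification passes the harness changes exactly one knob: bdev_verify (which just completed above) and bdev_verify_big_io (starting here) drive the same bdevperf binary against the same JSON bdev config, with only the I/O size raised from 4 KiB to 64 KiB. A condensed sketch of the two invocations, using only flags that appear in this log:

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
  # bdev_verify: 4 KiB I/Os, queue depth 128, 5 s runtime, cores 0-1 (-m 0x3)
  "$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w verify -t 5 -C -m 0x3
  # bdev_verify_big_io: identical apart from 64 KiB I/Os
  "$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify -t 5 -C -m 0x3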
00:13:34.280 [2024-06-11 06:06:04.801273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111116 ] 00:13:34.538 [2024-06-11 06:06:04.996338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:34.796 [2024-06-11 06:06:05.253576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.796 [2024-06-11 06:06:05.253576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.053 [2024-06-11 06:06:05.696957] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:35.053 [2024-06-11 06:06:05.697324] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:35.311 [2024-06-11 06:06:05.704938] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:35.311 [2024-06-11 06:06:05.705150] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:35.311 [2024-06-11 06:06:05.712949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:35.311 [2024-06-11 06:06:05.713105] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:35.311 [2024-06-11 06:06:05.713222] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:35.311 [2024-06-11 06:06:05.950298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:35.311 [2024-06-11 06:06:05.951048] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.311 [2024-06-11 06:06:05.951230] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:13:35.311 [2024-06-11 06:06:05.951330] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.311 [2024-06-11 06:06:05.954338] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.311 [2024-06-11 06:06:05.954543] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:35.877 [2024-06-11 06:06:06.372673] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:35.877 [2024-06-11 06:06:06.377063] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:35.877 [2024-06-11 06:06:06.382025] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:13:35.877 [2024-06-11 06:06:06.386765] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:13:35.877 [2024-06-11 06:06:06.390669] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:35.877 [2024-06-11 06:06:06.395356] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:35.877 [2024-06-11 06:06:06.399070] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:35.877 [2024-06-11 06:06:06.403856] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:35.877 [2024-06-11 06:06:06.407800] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:35.877 [2024-06-11 06:06:06.412520] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:35.877 [2024-06-11 06:06:06.416567] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:35.877 [2024-06-11 06:06:06.421275] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:35.877 [2024-06-11 06:06:06.425143] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:35.877 [2024-06-11 06:06:06.429820] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:35.877 [2024-06-11 06:06:06.434595] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:13:35.877 [2024-06-11 06:06:06.438501] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:13:36.135 [2024-06-11 06:06:06.538651] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:36.135 [2024-06-11 06:06:06.546458] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:36.135 Running I/O for 5 seconds... 00:13:42.689 00:13:42.689 Latency(us) 00:13:42.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.689 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x0 length 0x100 00:13:42.689 Malloc0 : 5.43 417.43 26.09 0.00 0.00 297993.93 19848.05 962692.63 00:13:42.689 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x100 length 0x100 00:13:42.689 Malloc0 : 5.45 392.46 24.53 0.00 0.00 316305.18 18974.23 1046578.71 00:13:42.689 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x0 length 0x80 00:13:42.689 Malloc1p0 : 5.52 309.95 19.37 0.00 0.00 392818.96 36450.50 866822.83 00:13:42.689 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x80 length 0x80 00:13:42.689 Malloc1p0 : 5.55 225.76 14.11 0.00 0.00 539580.43 39945.75 946714.33 00:13:42.689 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x0 length 0x80 00:13:42.689 Malloc1p1 : 5.66 146.53 9.16 0.00 0.00 829076.74 36200.84 1765602.26 00:13:42.689 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x80 length 0x80 00:13:42.689 Malloc1p1 : 5.76 138.27 8.64 0.00 0.00 872482.85 39446.43 1829515.46 00:13:42.689 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x0 length 0x20 00:13:42.689 Malloc2p0 : 5.52 81.65 5.10 0.00 0.00 370409.60 6616.02 551251.38 00:13:42.689 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x20 length 0x20 00:13:42.689 Malloc2p0 : 5.56 77.21 4.83 0.00 0.00 389319.27 5430.13 591197.14 00:13:42.689 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x0 length 0x20 00:13:42.689 Malloc2p1 : 5.52 81.64 5.10 0.00 0.00 369144.17 5867.03 539267.66 00:13:42.689 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x20 length 0x20 00:13:42.689 Malloc2p1 : 5.56 77.19 4.82 0.00 0.00 387844.47 6834.47 575218.83 00:13:42.689 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x0 length 0x20 00:13:42.689 Malloc2p2 : 5.53 81.62 5.10 0.00 0.00 367922.19 7177.75 527283.93 00:13:42.689 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x20 length 0x20 00:13:42.689 Malloc2p2 : 5.56 77.17 4.82 0.00 0.00 386394.59 6865.68 563235.11 00:13:42.689 Job: Malloc2p3 (Core Mask 0x1, workload: 
verify, depth: 32, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x0 length 0x20 00:13:42.689 Malloc2p3 : 5.53 81.61 5.10 0.00 0.00 366692.15 6928.09 515300.21 00:13:42.689 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x20 length 0x20 00:13:42.689 Malloc2p3 : 5.56 77.16 4.82 0.00 0.00 384928.99 6865.68 551251.38 00:13:42.689 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x0 length 0x20 00:13:42.689 Malloc2p4 : 5.53 81.59 5.10 0.00 0.00 365392.23 6896.88 505313.77 00:13:42.689 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x20 length 0x20 00:13:42.689 Malloc2p4 : 5.61 80.42 5.03 0.00 0.00 370826.23 7989.15 539267.66 00:13:42.689 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x0 length 0x20 00:13:42.689 Malloc2p5 : 5.53 81.57 5.10 0.00 0.00 364068.88 6834.47 493330.04 00:13:42.689 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x20 length 0x20 00:13:42.689 Malloc2p5 : 5.61 80.40 5.02 0.00 0.00 369357.05 8051.57 523289.36 00:13:42.689 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x0 length 0x20 00:13:42.689 Malloc2p6 : 5.57 85.13 5.32 0.00 0.00 350121.08 6553.60 479349.03 00:13:42.689 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x20 length 0x20 00:13:42.689 Malloc2p6 : 5.61 80.38 5.02 0.00 0.00 367818.95 8363.64 509308.34 00:13:42.689 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x0 length 0x20 00:13:42.689 Malloc2p7 : 5.57 85.11 5.32 0.00 0.00 348840.38 6116.69 467365.30 00:13:42.689 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x20 length 0x20 00:13:42.689 Malloc2p7 : 5.61 80.37 5.02 0.00 0.00 366258.22 7427.41 493330.04 00:13:42.689 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x0 length 0x100 00:13:42.689 TestPT : 5.74 145.39 9.09 0.00 0.00 801936.12 44938.97 1765602.26 00:13:42.689 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x100 length 0x100 00:13:42.689 TestPT : 5.70 139.64 8.73 0.00 0.00 832999.88 46936.26 1869461.21 00:13:42.689 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x0 length 0x200 00:13:42.689 raid0 : 5.74 149.55 9.35 0.00 0.00 771843.86 35951.18 1741634.80 00:13:42.689 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x200 length 0x200 00:13:42.689 raid0 : 5.78 143.34 8.96 0.00 0.00 798514.36 40944.40 1805548.01 00:13:42.689 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x0 length 0x200 00:13:42.689 concat0 : 5.74 154.91 9.68 0.00 0.00 736735.92 26214.40 1741634.80 00:13:42.689 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x200 length 0x200 00:13:42.689 concat0 : 5.79 153.63 9.60 0.00 0.00 
738840.98 38697.45 1813537.16 00:13:42.689 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x0 length 0x100 00:13:42.689 raid1 : 5.69 172.33 10.77 0.00 0.00 659023.33 20097.71 1749623.95 00:13:42.689 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:42.689 Verification LBA range: start 0x100 length 0x100 00:13:42.690 raid1 : 5.78 165.38 10.34 0.00 0.00 678158.15 18724.57 1829515.46 00:13:42.690 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:13:42.690 Verification LBA range: start 0x0 length 0x4e 00:13:42.690 AIO0 : 5.74 168.11 10.51 0.00 0.00 406342.39 4119.41 1018616.69 00:13:42.690 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:13:42.690 Verification LBA range: start 0x4e length 0x4e 00:13:42.690 AIO0 : 5.79 171.29 10.71 0.00 0.00 394026.58 2543.42 1070546.16 00:13:42.690 =================================================================================================================== 00:13:42.690 Total : 4484.19 280.26 0.00 0.00 508082.49 2543.42 1869461.21 00:13:45.213 00:13:45.213 real 0m10.736s 00:13:45.213 user 0m19.374s 00:13:45.213 sys 0m0.696s 00:13:45.213 06:06:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:45.213 06:06:15 -- common/autotest_common.sh@10 -- # set +x 00:13:45.213 ************************************ 00:13:45.213 END TEST bdev_verify_big_io 00:13:45.213 ************************************ 00:13:45.213 06:06:15 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:45.213 06:06:15 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:13:45.213 06:06:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:45.213 06:06:15 -- common/autotest_common.sh@10 -- # set +x 00:13:45.213 ************************************ 00:13:45.213 START TEST bdev_write_zeroes 00:13:45.213 ************************************ 00:13:45.213 06:06:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:45.213 [2024-06-11 06:06:15.603514] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
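A note on the 'Queue depth is limited to N' warnings bdevperf printed at the start of the bdev_verify_big_io run above: a verify job cannot keep more 64 KiB requests in flight than the bdev has distinct 64 KiB extents, so the configured -q 128 gets clamped per bdev. The clamped values are consistent with half the extent count; a worked check against the bdev geometries dumped earlier in this log (the halving factor is inferred from these data points, not taken from bdevperf's source):

  # Malloc2p* splits: 8192 blocks x 512 B = 4 MiB -> 64 extents of 64 KiB -> depth 32
  echo $(( 8192 * 512 / 65536 / 2 ))    # prints 32
  # AIO0: 5000 blocks x 2048 B = 10240000 B -> 156 extents -> depth 78
  echo $(( 5000 * 2048 / 65536 / 2 ))   # prints 78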
00:13:45.213 [2024-06-11 06:06:15.603741] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111272 ]
00:13:45.213 [2024-06-11 06:06:15.784341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:45.470 [2024-06-11 06:06:16.028079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:46.036 [2024-06-11 06:06:16.482675] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:13:46.036 [2024-06-11 06:06:16.483049] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:13:46.036 [2024-06-11 06:06:16.490654] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:13:46.036 [2024-06-11 06:06:16.490839] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:13:46.036 [2024-06-11 06:06:16.498675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:13:46.036 [2024-06-11 06:06:16.498826] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:13:46.036 [2024-06-11 06:06:16.498930] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:13:46.294 [2024-06-11 06:06:16.742349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:13:46.294 [2024-06-11 06:06:16.743019] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:46.294 [2024-06-11 06:06:16.743181] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:13:46.294 [2024-06-11 06:06:16.743296] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:46.294 [2024-06-11 06:06:16.746091] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:46.294 [2024-06-11 06:06:16.746260] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:13:46.552 Running I/O for 1 seconds...
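Every bdev exercised in the table below advertises write_zeroes support in its supported_io_types, as the JSON dumps earlier in this run show. The same jq pattern the trim test used for unmap can confirm that against a live target; a sketch, assuming rpc.py is talking to this bdevperf instance's RPC socket:

  # list bdevs that accept write_zeroes, mirroring the unmap filter used by bdev_fio_trim
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | select(.supported_io_types.write_zeroes == true) | .name'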
00:13:47.928
00:13:47.928 Latency(us)
00:13:47.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:47.928 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:47.928 Malloc0 : 1.03 5709.16 22.30 0.00 0.00 22406.21 639.76 36700.16
00:13:47.928 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:47.928 Malloc1p0 : 1.03 5702.27 22.27 0.00 0.00 22402.81 944.03 35701.52
00:13:47.928 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:47.928 Malloc1p1 : 1.03 5695.67 22.25 0.00 0.00 22380.98 838.70 34952.53
00:13:47.928 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:47.928 Malloc2p0 : 1.03 5689.18 22.22 0.00 0.00 22370.50 819.20 34453.21
00:13:47.928 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:47.928 Malloc2p1 : 1.04 5682.67 22.20 0.00 0.00 22355.60 830.90 33704.23
00:13:47.928 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:47.928 Malloc2p2 : 1.04 5676.24 22.17 0.00 0.00 22334.98 830.90 32955.25
00:13:47.928 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:47.928 Malloc2p3 : 1.04 5669.77 22.15 0.00 0.00 22317.98 827.00 32206.26
00:13:47.928 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:47.928 Malloc2p4 : 1.04 5663.41 22.12 0.00 0.00 22302.60 819.20 31582.11
00:13:47.928 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:47.928 Malloc2p5 : 1.04 5655.42 22.09 0.00 0.00 22295.31 823.10 30833.13
00:13:47.928 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:47.928 Malloc2p6 : 1.04 5648.43 22.06 0.00 0.00 22280.85 823.10 30208.98
00:13:47.928 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:47.928 Malloc2p7 : 1.04 5642.05 22.04 0.00 0.00 22267.14 803.60 29584.82
00:13:47.928 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:47.928 TestPT : 1.04 5635.50 22.01 0.00 0.00 22245.71 862.11 28835.84
00:13:47.928 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:47.928 raid0 : 1.05 5703.97 22.28 0.00 0.00 21929.86 1279.51 27837.20
00:13:47.928 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:47.928 concat0 : 1.06 5696.74 22.25 0.00 0.00 21891.37 1310.72 26713.72
00:13:47.928 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:47.928 raid1 : 1.06 5687.87 22.22 0.00 0.00 21847.38 2137.72 24841.26
00:13:47.928 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:47.928 AIO0 : 1.06 5670.19 22.15 0.00 0.00 21824.94 1271.71 23592.96
00:13:47.928 ===================================================================================================================
00:13:47.928 Total : 90828.53 354.80 0.00 0.00 22214.09 639.76 36700.16
00:13:50.460
00:13:50.460 real 0m5.409s
00:13:50.460 user 0m4.612s
00:13:50.460 sys 0m0.567s
00:13:50.460 06:06:20 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:50.460 06:06:20 -- common/autotest_common.sh@10 -- # set +x
00:13:50.460 ************************************
00:13:50.460 END TEST bdev_write_zeroes
00:13:50.460 ************************************
00:13:50.460 06:06:20 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:50.460 06:06:20 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:13:50.460 06:06:20 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:13:50.460 06:06:20 -- common/autotest_common.sh@10 -- # set +x
00:13:50.460 ************************************
00:13:50.460 START TEST bdev_json_nonenclosed
00:13:50.460 ************************************
00:13:50.460 06:06:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:50.719 [2024-06-11 06:06:21.086470] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:13:50.719 [2024-06-11 06:06:21.086705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111358 ]
00:13:50.977 [2024-06-11 06:06:21.269734] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:50.977 [2024-06-11 06:06:21.528845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:50.977 [2024-06-11 06:06:21.529382] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:13:50.977 [2024-06-11 06:06:21.529531] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:51.544
00:13:51.544 real 0m1.046s
00:13:51.544 user 0m0.779s
00:13:51.544 sys 0m0.167s
00:13:51.544 06:06:22 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:51.544 ************************************
00:13:51.544 END TEST bdev_json_nonenclosed
00:13:51.544 06:06:22 -- common/autotest_common.sh@10 -- # set +x
00:13:51.544 ************************************
00:13:51.544 06:06:22 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:51.544 06:06:22 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:13:51.544 06:06:22 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:13:51.544 06:06:22 -- common/autotest_common.sh@10 -- # set +x
00:13:51.544 ************************************
00:13:51.544 START TEST bdev_json_nonarray
00:13:51.544 ************************************
00:13:51.544 06:06:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:51.802 [2024-06-11 06:06:22.203596] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
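The two negative tests here, bdev_json_nonenclosed above and bdev_json_nonarray starting below, feed bdevperf deliberately malformed --json configs and expect exactly the errors logged: the top-level document must be a JSON object, and its "subsystems" key must be an array. For comparison, a minimal well-formed skeleton (subsystem contents elided):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": []
      }
    ]
  }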
00:13:51.802 [2024-06-11 06:06:22.203830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111396 ] 00:13:51.802 [2024-06-11 06:06:22.386264] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.059 [2024-06-11 06:06:22.633490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.059 [2024-06-11 06:06:22.634003] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:52.059 [2024-06-11 06:06:22.634186] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:52.625 00:13:52.625 real 0m1.026s 00:13:52.625 user 0m0.732s 00:13:52.625 sys 0m0.193s 00:13:52.625 06:06:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:52.625 06:06:23 -- common/autotest_common.sh@10 -- # set +x 00:13:52.625 ************************************ 00:13:52.625 END TEST bdev_json_nonarray 00:13:52.625 ************************************ 00:13:52.625 06:06:23 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:13:52.625 06:06:23 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:13:52.625 06:06:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:52.625 06:06:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:52.625 06:06:23 -- common/autotest_common.sh@10 -- # set +x 00:13:52.625 ************************************ 00:13:52.625 START TEST bdev_qos 00:13:52.625 ************************************ 00:13:52.625 06:06:23 -- common/autotest_common.sh@1104 -- # qos_test_suite '' 00:13:52.625 06:06:23 -- bdev/blockdev.sh@444 -- # QOS_PID=111435 00:13:52.625 06:06:23 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 111435' 00:13:52.625 Process qos testing pid: 111435 00:13:52.625 06:06:23 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:13:52.625 06:06:23 -- bdev/blockdev.sh@447 -- # waitforlisten 111435 00:13:52.625 06:06:23 -- common/autotest_common.sh@819 -- # '[' -z 111435 ']' 00:13:52.625 06:06:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.625 06:06:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:52.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.625 06:06:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.625 06:06:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:52.625 06:06:23 -- common/autotest_common.sh@10 -- # set +x 00:13:52.625 06:06:23 -- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:13:52.883 [2024-06-11 06:06:23.293493] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
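For reference, the bdevperf invocation captured on the waitforlisten line above drives the whole QoS suite; the flag meanings below follow bdevperf's usage text, and the trailing '' is an empty positional argument passed through by the run_test wrapper:

    # Flags as used by the QoS suite above:
    #   -z          start suspended and wait for a perform_tests RPC before issuing I/O
    #   -m 0x2      core mask -- run the reactor on core 1 ('Reactor started on core 1' below)
    #   -q 256      per-job queue depth
    #   -o 4096     I/O size in bytes
    #   -w randread workload pattern
    #   -t 60       run time in seconds
    build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 ''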
00:13:52.883 [2024-06-11 06:06:23.294018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111435 ] 00:13:52.883 [2024-06-11 06:06:23.482459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.499 [2024-06-11 06:06:23.791516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.757 06:06:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:53.757 06:06:24 -- common/autotest_common.sh@852 -- # return 0 00:13:53.757 06:06:24 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:13:53.757 06:06:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:53.757 06:06:24 -- common/autotest_common.sh@10 -- # set +x 00:13:54.015 Malloc_0 00:13:54.015 06:06:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.015 06:06:24 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:13:54.015 06:06:24 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_0 00:13:54.015 06:06:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:54.015 06:06:24 -- common/autotest_common.sh@889 -- # local i 00:13:54.015 06:06:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:54.015 06:06:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:54.015 06:06:24 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:54.015 06:06:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.015 06:06:24 -- common/autotest_common.sh@10 -- # set +x 00:13:54.015 06:06:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.015 06:06:24 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:13:54.015 06:06:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.015 06:06:24 -- common/autotest_common.sh@10 -- # set +x 00:13:54.015 [ 00:13:54.015 { 00:13:54.015 "name": "Malloc_0", 00:13:54.015 "aliases": [ 00:13:54.015 "98e2dd2b-c3e4-4529-8aae-b642d67b768d" 00:13:54.015 ], 00:13:54.015 "product_name": "Malloc disk", 00:13:54.015 "block_size": 512, 00:13:54.015 "num_blocks": 262144, 00:13:54.015 "uuid": "98e2dd2b-c3e4-4529-8aae-b642d67b768d", 00:13:54.015 "assigned_rate_limits": { 00:13:54.015 "rw_ios_per_sec": 0, 00:13:54.015 "rw_mbytes_per_sec": 0, 00:13:54.015 "r_mbytes_per_sec": 0, 00:13:54.015 "w_mbytes_per_sec": 0 00:13:54.015 }, 00:13:54.015 "claimed": false, 00:13:54.015 "zoned": false, 00:13:54.015 "supported_io_types": { 00:13:54.015 "read": true, 00:13:54.015 "write": true, 00:13:54.015 "unmap": true, 00:13:54.015 "write_zeroes": true, 00:13:54.015 "flush": true, 00:13:54.015 "reset": true, 00:13:54.015 "compare": false, 00:13:54.015 "compare_and_write": false, 00:13:54.015 "abort": true, 00:13:54.015 "nvme_admin": false, 00:13:54.015 "nvme_io": false 00:13:54.015 }, 00:13:54.015 "memory_domains": [ 00:13:54.016 { 00:13:54.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.016 "dma_device_type": 2 00:13:54.016 } 00:13:54.016 ], 00:13:54.016 "driver_specific": {} 00:13:54.016 } 00:13:54.016 ] 00:13:54.016 06:06:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.016 06:06:24 -- common/autotest_common.sh@895 -- # return 0 00:13:54.016 06:06:24 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:13:54.016 06:06:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.016 06:06:24 -- common/autotest_common.sh@10 -- # 
set +x 00:13:54.016 Null_1 00:13:54.016 06:06:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.016 06:06:24 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:13:54.016 06:06:24 -- common/autotest_common.sh@887 -- # local bdev_name=Null_1 00:13:54.016 06:06:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:54.016 06:06:24 -- common/autotest_common.sh@889 -- # local i 00:13:54.016 06:06:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:54.016 06:06:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:54.016 06:06:24 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:13:54.016 06:06:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.016 06:06:24 -- common/autotest_common.sh@10 -- # set +x 00:13:54.016 06:06:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.016 06:06:24 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:13:54.016 06:06:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.016 06:06:24 -- common/autotest_common.sh@10 -- # set +x 00:13:54.016 [ 00:13:54.016 { 00:13:54.016 "name": "Null_1", 00:13:54.016 "aliases": [ 00:13:54.016 "f7b43fbd-4f2e-4c07-bf44-8a8f9098f725" 00:13:54.016 ], 00:13:54.016 "product_name": "Null disk", 00:13:54.016 "block_size": 512, 00:13:54.016 "num_blocks": 262144, 00:13:54.016 "uuid": "f7b43fbd-4f2e-4c07-bf44-8a8f9098f725", 00:13:54.016 "assigned_rate_limits": { 00:13:54.016 "rw_ios_per_sec": 0, 00:13:54.016 "rw_mbytes_per_sec": 0, 00:13:54.016 "r_mbytes_per_sec": 0, 00:13:54.016 "w_mbytes_per_sec": 0 00:13:54.016 }, 00:13:54.016 "claimed": false, 00:13:54.016 "zoned": false, 00:13:54.016 "supported_io_types": { 00:13:54.016 "read": true, 00:13:54.016 "write": true, 00:13:54.016 "unmap": false, 00:13:54.016 "write_zeroes": true, 00:13:54.016 "flush": false, 00:13:54.016 "reset": true, 00:13:54.016 "compare": false, 00:13:54.016 "compare_and_write": false, 00:13:54.016 "abort": true, 00:13:54.016 "nvme_admin": false, 00:13:54.016 "nvme_io": false 00:13:54.016 }, 00:13:54.016 "driver_specific": {} 00:13:54.016 } 00:13:54.016 ] 00:13:54.016 06:06:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.016 06:06:24 -- common/autotest_common.sh@895 -- # return 0 00:13:54.016 06:06:24 -- bdev/blockdev.sh@455 -- # qos_function_test 00:13:54.016 06:06:24 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:13:54.016 06:06:24 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:13:54.016 06:06:24 -- bdev/blockdev.sh@410 -- # local io_result=0 00:13:54.016 06:06:24 -- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:54.016 06:06:24 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:13:54.016 06:06:24 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:13:54.016 06:06:24 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:13:54.016 06:06:24 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:13:54.016 06:06:24 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:54.016 06:06:24 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:54.016 06:06:24 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:54.016 06:06:24 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:54.016 06:06:24 -- bdev/blockdev.sh@376 -- # tail -1 00:13:54.016 Running I/O for 60 seconds... 
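Each QoS check that follows uses the same measure-then-verify pattern: sample iostat.py for five one-second intervals, keep the device's last line, pull the relevant column with awk (column 2 for IOPS, column 6 for bandwidth), apply a limit derived from the unthrottled baseline, then re-measure and require the throttled result to land within +/-10% of the limit (18000..22000 around the 20000 IOPS cap below, 11059..13516 around 12288 KB/s, 1843..2252 around 2048 KB/s). A condensed sketch of the verification step, reconstructed from the xtrace rather than copied from blockdev.sh (the arithmetic that turns the 83234-IOPS baseline into the 20000 cap is not visible in this log):

    # Reconstructed from the xtrace below; variable names follow the script's own.
    # Paths are relative to the SPDK repo root.
    qos_limit=20000                                       # cap set via bdev_set_qos_limit
    iostat_result=$(scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1)
    qos_result=$(awk '{print $2}' <<< "$iostat_result")   # column 2 = IOPS
    qos_result=${qos_result%.*}                           # truncate the fraction
    lower_limit=$((qos_limit * 9 / 10))                   # 18000
    upper_limit=$((qos_limit * 11 / 10))                  # 22000
    if [ "$qos_result" -lt "$lower_limit" ] || [ "$qos_result" -gt "$upper_limit" ]; then
        echo "QoS result $qos_result outside $lower_limit..$upper_limit" >&2
        exit 1
    fi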
00:13:59.276 06:06:29 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 83234.09 332936.37 0.00 0.00 337920.00 0.00 0.00 ' 00:13:59.276 06:06:29 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:13:59.276 06:06:29 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:13:59.276 06:06:29 -- bdev/blockdev.sh@378 -- # iostat_result=83234.09 00:13:59.276 06:06:29 -- bdev/blockdev.sh@383 -- # echo 83234 00:13:59.276 06:06:29 -- bdev/blockdev.sh@414 -- # io_result=83234 00:13:59.276 06:06:29 -- bdev/blockdev.sh@416 -- # iops_limit=20000 00:13:59.276 06:06:29 -- bdev/blockdev.sh@417 -- # '[' 20000 -gt 1000 ']' 00:13:59.276 06:06:29 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 20000 Malloc_0 00:13:59.276 06:06:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:59.276 06:06:29 -- common/autotest_common.sh@10 -- # set +x 00:13:59.276 06:06:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:59.276 06:06:29 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 20000 IOPS Malloc_0 00:13:59.276 06:06:29 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:59.276 06:06:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:59.276 06:06:29 -- common/autotest_common.sh@10 -- # set +x 00:13:59.276 ************************************ 00:13:59.276 START TEST bdev_qos_iops 00:13:59.276 ************************************ 00:13:59.276 06:06:29 -- common/autotest_common.sh@1104 -- # run_qos_test 20000 IOPS Malloc_0 00:13:59.276 06:06:29 -- bdev/blockdev.sh@387 -- # local qos_limit=20000 00:13:59.276 06:06:29 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:13:59.276 06:06:29 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:13:59.276 06:06:29 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:13:59.276 06:06:29 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:59.276 06:06:29 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:59.276 06:06:29 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:59.276 06:06:29 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:59.276 06:06:29 -- bdev/blockdev.sh@376 -- # tail -1 00:14:04.539 06:06:34 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 19870.21 79480.83 0.00 0.00 80560.00 0.00 0.00 ' 00:14:04.539 06:06:34 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:14:04.539 06:06:34 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:14:04.539 06:06:34 -- bdev/blockdev.sh@378 -- # iostat_result=19870.21 00:14:04.539 06:06:34 -- bdev/blockdev.sh@383 -- # echo 19870 00:14:04.539 06:06:34 -- bdev/blockdev.sh@390 -- # qos_result=19870 00:14:04.539 06:06:34 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:14:04.539 06:06:34 -- bdev/blockdev.sh@394 -- # lower_limit=18000 00:14:04.539 06:06:34 -- bdev/blockdev.sh@395 -- # upper_limit=22000 00:14:04.539 06:06:34 -- bdev/blockdev.sh@398 -- # '[' 19870 -lt 18000 ']' 00:14:04.539 06:06:34 -- bdev/blockdev.sh@398 -- # '[' 19870 -gt 22000 ']' 00:14:04.539 00:14:04.539 real 0m5.227s 00:14:04.539 user 0m0.130s 00:14:04.539 sys 0m0.040s 00:14:04.539 06:06:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:04.539 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:14:04.539 ************************************ 00:14:04.539 END TEST bdev_qos_iops 00:14:04.539 ************************************ 00:14:04.539 06:06:34 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:14:04.539 06:06:34 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:14:04.539 06:06:34 -- 
bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:14:04.539 06:06:34 -- bdev/blockdev.sh@375 -- # local iostat_result 00:14:04.539 06:06:34 -- bdev/blockdev.sh@376 -- # grep Null_1 00:14:04.539 06:06:34 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:04.539 06:06:34 -- bdev/blockdev.sh@376 -- # tail -1 00:14:09.811 06:06:40 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 30273.34 121093.36 0.00 0.00 122880.00 0.00 0.00 ' 00:14:09.811 06:06:40 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:14:09.811 06:06:40 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:09.811 06:06:40 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:14:09.811 06:06:40 -- bdev/blockdev.sh@380 -- # iostat_result=122880.00 00:14:09.811 06:06:40 -- bdev/blockdev.sh@383 -- # echo 122880 00:14:09.811 06:06:40 -- bdev/blockdev.sh@425 -- # bw_limit=122880 00:14:09.811 06:06:40 -- bdev/blockdev.sh@426 -- # bw_limit=12 00:14:09.811 06:06:40 -- bdev/blockdev.sh@427 -- # '[' 12 -lt 2 ']' 00:14:09.811 06:06:40 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 12 Null_1 00:14:09.811 06:06:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.811 06:06:40 -- common/autotest_common.sh@10 -- # set +x 00:14:09.811 06:06:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.811 06:06:40 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 12 BANDWIDTH Null_1 00:14:09.811 06:06:40 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:09.811 06:06:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:09.811 06:06:40 -- common/autotest_common.sh@10 -- # set +x 00:14:09.811 ************************************ 00:14:09.811 START TEST bdev_qos_bw 00:14:09.811 ************************************ 00:14:09.811 06:06:40 -- common/autotest_common.sh@1104 -- # run_qos_test 12 BANDWIDTH Null_1 00:14:09.811 06:06:40 -- bdev/blockdev.sh@387 -- # local qos_limit=12 00:14:09.811 06:06:40 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:14:09.811 06:06:40 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:14:09.811 06:06:40 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:14:09.811 06:06:40 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:14:09.811 06:06:40 -- bdev/blockdev.sh@375 -- # local iostat_result 00:14:09.811 06:06:40 -- bdev/blockdev.sh@376 -- # grep Null_1 00:14:09.811 06:06:40 -- bdev/blockdev.sh@376 -- # tail -1 00:14:09.811 06:06:40 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:15.073 06:06:45 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 3072.54 12290.17 0.00 0.00 12568.00 0.00 0.00 ' 00:14:15.074 06:06:45 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:14:15.074 06:06:45 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:15.074 06:06:45 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:14:15.074 06:06:45 -- bdev/blockdev.sh@380 -- # iostat_result=12568.00 00:14:15.074 06:06:45 -- bdev/blockdev.sh@383 -- # echo 12568 00:14:15.074 06:06:45 -- bdev/blockdev.sh@390 -- # qos_result=12568 00:14:15.074 06:06:45 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:15.074 06:06:45 -- bdev/blockdev.sh@392 -- # qos_limit=12288 00:14:15.074 06:06:45 -- bdev/blockdev.sh@394 -- # lower_limit=11059 00:14:15.074 06:06:45 -- bdev/blockdev.sh@395 -- # upper_limit=13516 00:14:15.074 06:06:45 -- bdev/blockdev.sh@398 -- # '[' 12568 -lt 11059 ']' 00:14:15.074 06:06:45 -- bdev/blockdev.sh@398 -- # '[' 
12568 -gt 13516 ']' 00:14:15.074 00:14:15.074 real 0m5.271s 00:14:15.074 user 0m0.115s 00:14:15.074 sys 0m0.048s 00:14:15.074 06:06:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:15.074 06:06:45 -- common/autotest_common.sh@10 -- # set +x 00:14:15.074 ************************************ 00:14:15.074 END TEST bdev_qos_bw 00:14:15.074 ************************************ 00:14:15.074 06:06:45 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:14:15.074 06:06:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:15.074 06:06:45 -- common/autotest_common.sh@10 -- # set +x 00:14:15.074 06:06:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:15.074 06:06:45 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:14:15.074 06:06:45 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:15.074 06:06:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:15.074 06:06:45 -- common/autotest_common.sh@10 -- # set +x 00:14:15.074 ************************************ 00:14:15.074 START TEST bdev_qos_ro_bw 00:14:15.074 ************************************ 00:14:15.074 06:06:45 -- common/autotest_common.sh@1104 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:14:15.074 06:06:45 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:14:15.074 06:06:45 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:14:15.074 06:06:45 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:14:15.074 06:06:45 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:14:15.074 06:06:45 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:14:15.074 06:06:45 -- bdev/blockdev.sh@375 -- # local iostat_result 00:14:15.074 06:06:45 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:15.074 06:06:45 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:14:15.074 06:06:45 -- bdev/blockdev.sh@376 -- # tail -1 00:14:20.386 06:06:50 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 511.87 2047.50 0.00 0.00 2068.00 0.00 0.00 ' 00:14:20.386 06:06:50 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:14:20.386 06:06:50 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:20.386 06:06:50 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:14:20.386 06:06:50 -- bdev/blockdev.sh@380 -- # iostat_result=2068.00 00:14:20.386 06:06:50 -- bdev/blockdev.sh@383 -- # echo 2068 00:14:20.386 06:06:50 -- bdev/blockdev.sh@390 -- # qos_result=2068 00:14:20.386 06:06:50 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:20.386 06:06:50 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:14:20.386 06:06:50 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:14:20.386 06:06:50 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:14:20.386 06:06:50 -- bdev/blockdev.sh@398 -- # '[' 2068 -lt 1843 ']' 00:14:20.386 06:06:50 -- bdev/blockdev.sh@398 -- # '[' 2068 -gt 2252 ']' 00:14:20.386 00:14:20.386 real 0m5.181s 00:14:20.386 user 0m0.114s 00:14:20.386 sys 0m0.045s 00:14:20.386 06:06:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:20.386 06:06:50 -- common/autotest_common.sh@10 -- # set +x 00:14:20.386 ************************************ 00:14:20.386 END TEST bdev_qos_ro_bw 00:14:20.386 ************************************ 00:14:20.387 06:06:50 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:14:20.387 06:06:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.387 06:06:50 -- common/autotest_common.sh@10 -- # set +x 00:14:20.953 06:06:51 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.953 06:06:51 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:14:20.953 06:06:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.953 06:06:51 -- common/autotest_common.sh@10 -- # set +x 00:14:21.212 00:14:21.212 Latency(us) 00:14:21.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.212 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:21.212 Malloc_0 : 26.82 27846.04 108.77 0.00 0.00 9106.34 1771.03 503316.48 00:14:21.212 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:21.212 Null_1 : 27.06 28638.54 111.87 0.00 0.00 8920.99 565.64 235679.94 00:14:21.212 =================================================================================================================== 00:14:21.212 Total : 56484.58 220.64 0.00 0.00 9011.93 565.64 503316.48 00:14:21.212 0 00:14:21.212 06:06:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:21.212 06:06:51 -- bdev/blockdev.sh@459 -- # killprocess 111435 00:14:21.212 06:06:51 -- common/autotest_common.sh@926 -- # '[' -z 111435 ']' 00:14:21.212 06:06:51 -- common/autotest_common.sh@930 -- # kill -0 111435 00:14:21.212 06:06:51 -- common/autotest_common.sh@931 -- # uname 00:14:21.212 06:06:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:21.212 06:06:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 111435 00:14:21.212 06:06:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:21.212 killing process with pid 111435 00:14:21.212 06:06:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:21.212 06:06:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 111435' 00:14:21.212 Received shutdown signal, test time was about 27.111800 seconds 00:14:21.212 00:14:21.212 Latency(us) 00:14:21.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.212 =================================================================================================================== 00:14:21.212 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:21.212 06:06:51 -- common/autotest_common.sh@945 -- # kill 111435 00:14:21.212 06:06:51 -- common/autotest_common.sh@950 -- # wait 111435 00:14:23.113 06:06:53 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:14:23.113 00:14:23.113 real 0m30.149s 00:14:23.113 user 0m30.756s 00:14:23.113 sys 0m0.922s 00:14:23.114 06:06:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:23.114 06:06:53 -- common/autotest_common.sh@10 -- # set +x 00:14:23.114 ************************************ 00:14:23.114 END TEST bdev_qos 00:14:23.114 ************************************ 00:14:23.114 06:06:53 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:14:23.114 06:06:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:23.114 06:06:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:23.114 06:06:53 -- common/autotest_common.sh@10 -- # set +x 00:14:23.114 ************************************ 00:14:23.114 START TEST bdev_qd_sampling 00:14:23.114 ************************************ 00:14:23.114 06:06:53 -- common/autotest_common.sh@1104 -- # qd_sampling_test_suite '' 00:14:23.114 06:06:53 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:14:23.114 06:06:53 -- bdev/blockdev.sh@539 -- # QD_PID=111918 00:14:23.114 Process bdev QD sampling period testing pid: 111918 00:14:23.114 06:06:53 -- 
bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 111918' 00:14:23.114 06:06:53 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:14:23.114 06:06:53 -- bdev/blockdev.sh@542 -- # waitforlisten 111918 00:14:23.114 06:06:53 -- common/autotest_common.sh@819 -- # '[' -z 111918 ']' 00:14:23.114 06:06:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.114 06:06:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:23.114 06:06:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.114 06:06:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:23.114 06:06:53 -- common/autotest_common.sh@10 -- # set +x 00:14:23.114 06:06:53 -- bdev/blockdev.sh@538 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:14:23.114 [2024-06-11 06:06:53.504298] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:14:23.114 [2024-06-11 06:06:53.504782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111918 ] 00:14:23.114 [2024-06-11 06:06:53.695315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:23.372 [2024-06-11 06:06:53.992963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.372 [2024-06-11 06:06:53.992964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.937 06:06:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:23.937 06:06:54 -- common/autotest_common.sh@852 -- # return 0 00:14:23.937 06:06:54 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:14:23.937 06:06:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.937 06:06:54 -- common/autotest_common.sh@10 -- # set +x 00:14:24.195 Malloc_QD 00:14:24.195 06:06:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.195 06:06:54 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:14:24.195 06:06:54 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_QD 00:14:24.195 06:06:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:24.195 06:06:54 -- common/autotest_common.sh@889 -- # local i 00:14:24.195 06:06:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:24.195 06:06:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:24.195 06:06:54 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:24.195 06:06:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.195 06:06:54 -- common/autotest_common.sh@10 -- # set +x 00:14:24.195 06:06:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.195 06:06:54 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:14:24.195 06:06:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:24.195 06:06:54 -- common/autotest_common.sh@10 -- # set +x 00:14:24.195 [ 00:14:24.195 { 00:14:24.195 "name": "Malloc_QD", 00:14:24.195 "aliases": [ 00:14:24.195 "1a984c61-b4e6-488a-992f-79f1dbfb36ab" 00:14:24.195 ], 00:14:24.195 "product_name": "Malloc disk", 00:14:24.195 "block_size": 512, 00:14:24.195 "num_blocks": 262144, 
00:14:24.195 "uuid": "1a984c61-b4e6-488a-992f-79f1dbfb36ab", 00:14:24.195 "assigned_rate_limits": { 00:14:24.195 "rw_ios_per_sec": 0, 00:14:24.195 "rw_mbytes_per_sec": 0, 00:14:24.195 "r_mbytes_per_sec": 0, 00:14:24.195 "w_mbytes_per_sec": 0 00:14:24.195 }, 00:14:24.195 "claimed": false, 00:14:24.195 "zoned": false, 00:14:24.195 "supported_io_types": { 00:14:24.195 "read": true, 00:14:24.195 "write": true, 00:14:24.195 "unmap": true, 00:14:24.195 "write_zeroes": true, 00:14:24.195 "flush": true, 00:14:24.195 "reset": true, 00:14:24.195 "compare": false, 00:14:24.195 "compare_and_write": false, 00:14:24.195 "abort": true, 00:14:24.195 "nvme_admin": false, 00:14:24.195 "nvme_io": false 00:14:24.195 }, 00:14:24.195 "memory_domains": [ 00:14:24.195 { 00:14:24.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.195 "dma_device_type": 2 00:14:24.195 } 00:14:24.195 ], 00:14:24.195 "driver_specific": {} 00:14:24.195 } 00:14:24.195 ] 00:14:24.195 06:06:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:24.195 06:06:54 -- common/autotest_common.sh@895 -- # return 0 00:14:24.195 06:06:54 -- bdev/blockdev.sh@548 -- # sleep 2 00:14:24.195 06:06:54 -- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:24.195 Running I/O for 5 seconds... 00:14:26.097 06:06:56 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:14:26.097 06:06:56 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:14:26.097 06:06:56 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:14:26.097 06:06:56 -- bdev/blockdev.sh@519 -- # local iostats 00:14:26.097 06:06:56 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:14:26.097 06:06:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.097 06:06:56 -- common/autotest_common.sh@10 -- # set +x 00:14:26.097 06:06:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.097 06:06:56 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:14:26.097 06:06:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.097 06:06:56 -- common/autotest_common.sh@10 -- # set +x 00:14:26.097 06:06:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.097 06:06:56 -- bdev/blockdev.sh@523 -- # iostats='{ 00:14:26.097 "tick_rate": 2100000000, 00:14:26.097 "ticks": 1728826309462, 00:14:26.097 "bdevs": [ 00:14:26.097 { 00:14:26.097 "name": "Malloc_QD", 00:14:26.097 "bytes_read": 907055616, 00:14:26.097 "num_read_ops": 221443, 00:14:26.097 "bytes_written": 0, 00:14:26.097 "num_write_ops": 0, 00:14:26.097 "bytes_unmapped": 0, 00:14:26.097 "num_unmap_ops": 0, 00:14:26.097 "bytes_copied": 0, 00:14:26.097 "num_copy_ops": 0, 00:14:26.097 "read_latency_ticks": 2045846971064, 00:14:26.097 "max_read_latency_ticks": 14276348, 00:14:26.097 "min_read_latency_ticks": 308648, 00:14:26.097 "write_latency_ticks": 0, 00:14:26.097 "max_write_latency_ticks": 0, 00:14:26.097 "min_write_latency_ticks": 0, 00:14:26.097 "unmap_latency_ticks": 0, 00:14:26.097 "max_unmap_latency_ticks": 0, 00:14:26.097 "min_unmap_latency_ticks": 0, 00:14:26.097 "copy_latency_ticks": 0, 00:14:26.097 "max_copy_latency_ticks": 0, 00:14:26.097 "min_copy_latency_ticks": 0, 00:14:26.097 "io_error": {}, 00:14:26.097 "queue_depth_polling_period": 10, 00:14:26.097 "queue_depth": 512, 00:14:26.097 "io_time": 30, 00:14:26.097 "weighted_io_time": 15360 00:14:26.097 } 00:14:26.097 ] 00:14:26.097 }' 00:14:26.097 06:06:56 -- bdev/blockdev.sh@525 -- # jq -r '.bdevs[0].queue_depth_polling_period' 
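The jq extraction above is the heart of the sampling check: the suite sets a queue-depth sampling period of 10 on Malloc_QD, drives I/O, then reads bdev_get_iostat and asserts the period round-trips (the '10 == null' and '10 -ne 10' guards below). A sketch of that round-trip using the same RPCs shown in the xtrace (rpc_cmd is the suite's rpc.py wrapper; a bare rpc.py call is assumed equivalent here):

    # Same RPC sequence as the xtrace above; paths relative to the SPDK repo root.
    scripts/rpc.py bdev_set_qd_sampling_period Malloc_QD 10
    period=$(scripts/rpc.py bdev_get_iostat -b Malloc_QD \
        | jq -r '.bdevs[0].queue_depth_polling_period')
    [ "$period" != "null" ] && [ "$period" -eq 10 ] \
        || { echo "sampling period did not stick: $period" >&2; exit 1; }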
00:14:26.097 06:06:56 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:14:26.097 06:06:56 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:14:26.097 06:06:56 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:14:26.097 06:06:56 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:14:26.097 06:06:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:26.097 06:06:56 -- common/autotest_common.sh@10 -- # set +x 00:14:26.097 00:14:26.097 Latency(us) 00:14:26.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.097 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:14:26.097 Malloc_QD : 1.97 57915.26 226.23 0.00 0.00 4410.17 1053.26 6803.26 00:14:26.097 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:26.097 Malloc_QD : 1.97 58272.70 227.63 0.00 0.00 4383.30 667.06 4930.80 00:14:26.097 =================================================================================================================== 00:14:26.097 Total : 116187.96 453.86 0.00 0.00 4396.69 667.06 6803.26 00:14:26.355 0 00:14:26.355 06:06:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:26.355 06:06:56 -- bdev/blockdev.sh@552 -- # killprocess 111918 00:14:26.355 06:06:56 -- common/autotest_common.sh@926 -- # '[' -z 111918 ']' 00:14:26.355 06:06:56 -- common/autotest_common.sh@930 -- # kill -0 111918 00:14:26.355 06:06:56 -- common/autotest_common.sh@931 -- # uname 00:14:26.355 06:06:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:26.355 06:06:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 111918 00:14:26.355 06:06:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:26.356 06:06:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:26.356 killing process with pid 111918 00:14:26.356 Received shutdown signal, test time was about 2.159483 seconds 00:14:26.356 00:14:26.356 Latency(us) 00:14:26.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.356 =================================================================================================================== 00:14:26.356 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:26.356 06:06:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 111918' 00:14:26.356 06:06:56 -- common/autotest_common.sh@945 -- # kill 111918 00:14:26.356 06:06:56 -- common/autotest_common.sh@950 -- # wait 111918 00:14:28.258 06:06:58 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:14:28.258 00:14:28.258 real 0m5.156s 00:14:28.258 user 0m9.221s 00:14:28.258 sys 0m0.567s 00:14:28.258 ************************************ 00:14:28.258 END TEST bdev_qd_sampling 00:14:28.258 ************************************ 00:14:28.258 06:06:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:28.258 06:06:58 -- common/autotest_common.sh@10 -- # set +x 00:14:28.258 06:06:58 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:14:28.258 06:06:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:28.258 06:06:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:28.258 06:06:58 -- common/autotest_common.sh@10 -- # set +x 00:14:28.259 ************************************ 00:14:28.259 START TEST bdev_error 00:14:28.259 ************************************ 00:14:28.259 06:06:58 -- common/autotest_common.sh@1104 -- # error_test_suite '' 00:14:28.259 06:06:58 -- bdev/blockdev.sh@464 -- # DEV_1=Dev_1 00:14:28.259 
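The error suite that begins here builds a small device chain before injecting failures; the RPC names below appear exactly as shown in the xtrace that follows (EE_Dev_1 is the error-injection passthrough that bdev_error_create layers on top of Dev_1):

    # Device chain used by error_test_suite, per the xtrace below.
    rpc_cmd bdev_malloc_create -b Dev_1 128 512    # base bdev
    rpc_cmd bdev_error_create Dev_1                # wraps it as EE_Dev_1
    rpc_cmd bdev_malloc_create -b Dev_2 128 512    # second device, left uninjected
    # Arm the injector: the next 5 I/Os of any type against EE_Dev_1 fail.
    rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5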
06:06:58 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:14:28.259 06:06:58 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:14:28.259 06:06:58 -- bdev/blockdev.sh@470 -- # ERR_PID=112014 00:14:28.259 Process error testing pid: 112014 00:14:28.259 06:06:58 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 112014' 00:14:28.259 06:06:58 -- bdev/blockdev.sh@472 -- # waitforlisten 112014 00:14:28.259 06:06:58 -- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:14:28.259 06:06:58 -- common/autotest_common.sh@819 -- # '[' -z 112014 ']' 00:14:28.259 06:06:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.259 06:06:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:28.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.259 06:06:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.259 06:06:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:28.259 06:06:58 -- common/autotest_common.sh@10 -- # set +x 00:14:28.259 [2024-06-11 06:06:58.735114] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:14:28.259 [2024-06-11 06:06:58.735402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112014 ] 00:14:28.517 [2024-06-11 06:06:58.918757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.776 [2024-06-11 06:06:59.167554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.035 06:06:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:29.035 06:06:59 -- common/autotest_common.sh@852 -- # return 0 00:14:29.035 06:06:59 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:14:29.035 06:06:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.035 06:06:59 -- common/autotest_common.sh@10 -- # set +x 00:14:29.294 Dev_1 00:14:29.294 06:06:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.294 06:06:59 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:14:29.294 06:06:59 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:14:29.294 06:06:59 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:29.294 06:06:59 -- common/autotest_common.sh@889 -- # local i 00:14:29.294 06:06:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:29.294 06:06:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:29.294 06:06:59 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:29.294 06:06:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.294 06:06:59 -- common/autotest_common.sh@10 -- # set +x 00:14:29.294 06:06:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.294 06:06:59 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:14:29.294 06:06:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.294 06:06:59 -- common/autotest_common.sh@10 -- # set +x 00:14:29.294 [ 00:14:29.294 { 00:14:29.294 "name": "Dev_1", 00:14:29.294 "aliases": [ 00:14:29.294 "50abea9e-8f65-493e-9b73-57459d0d0212" 00:14:29.294 ], 00:14:29.294 "product_name": "Malloc disk", 00:14:29.294 "block_size": 512, 00:14:29.294 "num_blocks": 262144, 
00:14:29.294 "uuid": "50abea9e-8f65-493e-9b73-57459d0d0212", 00:14:29.294 "assigned_rate_limits": { 00:14:29.294 "rw_ios_per_sec": 0, 00:14:29.294 "rw_mbytes_per_sec": 0, 00:14:29.294 "r_mbytes_per_sec": 0, 00:14:29.294 "w_mbytes_per_sec": 0 00:14:29.294 }, 00:14:29.294 "claimed": false, 00:14:29.294 "zoned": false, 00:14:29.294 "supported_io_types": { 00:14:29.294 "read": true, 00:14:29.294 "write": true, 00:14:29.294 "unmap": true, 00:14:29.294 "write_zeroes": true, 00:14:29.294 "flush": true, 00:14:29.294 "reset": true, 00:14:29.294 "compare": false, 00:14:29.294 "compare_and_write": false, 00:14:29.294 "abort": true, 00:14:29.294 "nvme_admin": false, 00:14:29.294 "nvme_io": false 00:14:29.294 }, 00:14:29.294 "memory_domains": [ 00:14:29.294 { 00:14:29.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.294 "dma_device_type": 2 00:14:29.294 } 00:14:29.294 ], 00:14:29.294 "driver_specific": {} 00:14:29.294 } 00:14:29.294 ] 00:14:29.294 06:06:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.294 06:06:59 -- common/autotest_common.sh@895 -- # return 0 00:14:29.294 06:06:59 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:14:29.294 06:06:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.294 06:06:59 -- common/autotest_common.sh@10 -- # set +x 00:14:29.294 true 00:14:29.294 06:06:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.294 06:06:59 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:14:29.294 06:06:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.294 06:06:59 -- common/autotest_common.sh@10 -- # set +x 00:14:29.552 Dev_2 00:14:29.552 06:07:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.552 06:07:00 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:14:29.552 06:07:00 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:14:29.552 06:07:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:29.552 06:07:00 -- common/autotest_common.sh@889 -- # local i 00:14:29.552 06:07:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:29.552 06:07:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:29.552 06:07:00 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:29.552 06:07:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.552 06:07:00 -- common/autotest_common.sh@10 -- # set +x 00:14:29.552 06:07:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.552 06:07:00 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:14:29.552 06:07:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.552 06:07:00 -- common/autotest_common.sh@10 -- # set +x 00:14:29.552 [ 00:14:29.552 { 00:14:29.552 "name": "Dev_2", 00:14:29.552 "aliases": [ 00:14:29.552 "69c04304-f380-4bc2-84cd-4b8950c9d61d" 00:14:29.552 ], 00:14:29.552 "product_name": "Malloc disk", 00:14:29.552 "block_size": 512, 00:14:29.552 "num_blocks": 262144, 00:14:29.552 "uuid": "69c04304-f380-4bc2-84cd-4b8950c9d61d", 00:14:29.552 "assigned_rate_limits": { 00:14:29.552 "rw_ios_per_sec": 0, 00:14:29.552 "rw_mbytes_per_sec": 0, 00:14:29.552 "r_mbytes_per_sec": 0, 00:14:29.552 "w_mbytes_per_sec": 0 00:14:29.552 }, 00:14:29.552 "claimed": false, 00:14:29.552 "zoned": false, 00:14:29.552 "supported_io_types": { 00:14:29.552 "read": true, 00:14:29.552 "write": true, 00:14:29.552 "unmap": true, 00:14:29.552 "write_zeroes": true, 00:14:29.552 "flush": true, 00:14:29.552 "reset": true, 00:14:29.552 "compare": false, 
00:14:29.552 "compare_and_write": false, 00:14:29.552 "abort": true, 00:14:29.552 "nvme_admin": false, 00:14:29.552 "nvme_io": false 00:14:29.552 }, 00:14:29.552 "memory_domains": [ 00:14:29.552 { 00:14:29.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.552 "dma_device_type": 2 00:14:29.552 } 00:14:29.552 ], 00:14:29.552 "driver_specific": {} 00:14:29.552 } 00:14:29.552 ] 00:14:29.552 06:07:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.552 06:07:00 -- common/autotest_common.sh@895 -- # return 0 00:14:29.552 06:07:00 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:14:29.552 06:07:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:29.552 06:07:00 -- common/autotest_common.sh@10 -- # set +x 00:14:29.552 06:07:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:29.552 06:07:00 -- bdev/blockdev.sh@482 -- # sleep 1 00:14:29.552 06:07:00 -- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:14:29.552 Running I/O for 5 seconds... 00:14:30.486 06:07:01 -- bdev/blockdev.sh@485 -- # kill -0 112014 00:14:30.486 Process is existed as continue on error is set. Pid: 112014 00:14:30.486 06:07:01 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 112014' 00:14:30.486 06:07:01 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:14:30.486 06:07:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.486 06:07:01 -- common/autotest_common.sh@10 -- # set +x 00:14:30.486 06:07:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:30.486 06:07:01 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:14:30.486 06:07:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:30.486 06:07:01 -- common/autotest_common.sh@10 -- # set +x 00:14:30.745 Timeout while waiting for response: 00:14:30.745 00:14:30.745 00:14:31.003 06:07:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.003 06:07:01 -- bdev/blockdev.sh@495 -- # sleep 5 00:14:35.186 00:14:35.186 Latency(us) 00:14:35.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.186 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:35.187 EE_Dev_1 : 0.90 49290.33 192.54 5.55 0.00 322.29 122.39 635.86 00:14:35.187 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:35.187 Dev_2 : 5.00 98147.34 383.39 0.00 0.00 160.79 77.53 393465.66 00:14:35.187 =================================================================================================================== 00:14:35.187 Total : 147437.67 575.93 5.55 0.00 174.20 77.53 393465.66 00:14:36.120 06:07:06 -- bdev/blockdev.sh@497 -- # killprocess 112014 00:14:36.120 06:07:06 -- common/autotest_common.sh@926 -- # '[' -z 112014 ']' 00:14:36.120 06:07:06 -- common/autotest_common.sh@930 -- # kill -0 112014 00:14:36.120 06:07:06 -- common/autotest_common.sh@931 -- # uname 00:14:36.120 06:07:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:36.120 06:07:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 112014 00:14:36.120 06:07:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:36.120 killing process with pid 112014 00:14:36.121 06:07:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:36.121 06:07:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 112014' 00:14:36.121 06:07:06 -- common/autotest_common.sh@945 -- # 
kill 112014 00:14:36.121 Received shutdown signal, test time was about 5.000000 seconds 00:14:36.121 00:14:36.121 Latency(us) 00:14:36.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.121 =================================================================================================================== 00:14:36.121 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:36.121 06:07:06 -- common/autotest_common.sh@950 -- # wait 112014 00:14:38.023 06:07:08 -- bdev/blockdev.sh@501 -- # ERR_PID=112144 00:14:38.023 06:07:08 -- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:14:38.023 Process error testing pid: 112144 00:14:38.023 06:07:08 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 112144' 00:14:38.023 06:07:08 -- bdev/blockdev.sh@503 -- # waitforlisten 112144 00:14:38.023 06:07:08 -- common/autotest_common.sh@819 -- # '[' -z 112144 ']' 00:14:38.023 06:07:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.023 06:07:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:38.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.023 06:07:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.023 06:07:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:38.023 06:07:08 -- common/autotest_common.sh@10 -- # set +x 00:14:38.023 [2024-06-11 06:07:08.380746] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:14:38.023 [2024-06-11 06:07:08.380956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112144 ] 00:14:38.023 [2024-06-11 06:07:08.541143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.281 [2024-06-11 06:07:08.774523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.848 06:07:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:38.848 06:07:09 -- common/autotest_common.sh@852 -- # return 0 00:14:38.848 06:07:09 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:14:38.848 06:07:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.848 06:07:09 -- common/autotest_common.sh@10 -- # set +x 00:14:38.848 Dev_1 00:14:38.848 06:07:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.848 06:07:09 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:14:38.848 06:07:09 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:14:38.848 06:07:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:38.848 06:07:09 -- common/autotest_common.sh@889 -- # local i 00:14:38.848 06:07:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:38.848 06:07:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:38.848 06:07:09 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:38.848 06:07:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.848 06:07:09 -- common/autotest_common.sh@10 -- # set +x 00:14:38.849 06:07:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.849 06:07:09 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:14:38.849 06:07:09 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:14:38.849 06:07:09 -- common/autotest_common.sh@10 -- # set +x 00:14:38.849 [ 00:14:38.849 { 00:14:38.849 "name": "Dev_1", 00:14:38.849 "aliases": [ 00:14:38.849 "d7c2c97c-ba1b-41de-81f4-7018c5440d15" 00:14:38.849 ], 00:14:38.849 "product_name": "Malloc disk", 00:14:38.849 "block_size": 512, 00:14:38.849 "num_blocks": 262144, 00:14:38.849 "uuid": "d7c2c97c-ba1b-41de-81f4-7018c5440d15", 00:14:38.849 "assigned_rate_limits": { 00:14:38.849 "rw_ios_per_sec": 0, 00:14:38.849 "rw_mbytes_per_sec": 0, 00:14:38.849 "r_mbytes_per_sec": 0, 00:14:38.849 "w_mbytes_per_sec": 0 00:14:38.849 }, 00:14:38.849 "claimed": false, 00:14:38.849 "zoned": false, 00:14:38.849 "supported_io_types": { 00:14:38.849 "read": true, 00:14:38.849 "write": true, 00:14:38.849 "unmap": true, 00:14:38.849 "write_zeroes": true, 00:14:38.849 "flush": true, 00:14:38.849 "reset": true, 00:14:38.849 "compare": false, 00:14:38.849 "compare_and_write": false, 00:14:38.849 "abort": true, 00:14:38.849 "nvme_admin": false, 00:14:38.849 "nvme_io": false 00:14:38.849 }, 00:14:38.849 "memory_domains": [ 00:14:38.849 { 00:14:38.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.849 "dma_device_type": 2 00:14:38.849 } 00:14:38.849 ], 00:14:38.849 "driver_specific": {} 00:14:38.849 } 00:14:38.849 ] 00:14:38.849 06:07:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.849 06:07:09 -- common/autotest_common.sh@895 -- # return 0 00:14:38.849 06:07:09 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:14:38.849 06:07:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.849 06:07:09 -- common/autotest_common.sh@10 -- # set +x 00:14:38.849 true 00:14:38.849 06:07:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.849 06:07:09 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:14:38.849 06:07:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.849 06:07:09 -- common/autotest_common.sh@10 -- # set +x 00:14:39.108 Dev_2 00:14:39.108 06:07:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.108 06:07:09 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:14:39.108 06:07:09 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:14:39.108 06:07:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:39.108 06:07:09 -- common/autotest_common.sh@889 -- # local i 00:14:39.108 06:07:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:39.108 06:07:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:39.108 06:07:09 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:39.108 06:07:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.108 06:07:09 -- common/autotest_common.sh@10 -- # set +x 00:14:39.108 06:07:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.108 06:07:09 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:14:39.108 06:07:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.108 06:07:09 -- common/autotest_common.sh@10 -- # set +x 00:14:39.108 [ 00:14:39.108 { 00:14:39.108 "name": "Dev_2", 00:14:39.108 "aliases": [ 00:14:39.108 "521101bc-1f5c-4060-b3e0-4d3ca95609d8" 00:14:39.108 ], 00:14:39.108 "product_name": "Malloc disk", 00:14:39.108 "block_size": 512, 00:14:39.108 "num_blocks": 262144, 00:14:39.108 "uuid": "521101bc-1f5c-4060-b3e0-4d3ca95609d8", 00:14:39.108 "assigned_rate_limits": { 00:14:39.108 "rw_ios_per_sec": 0, 00:14:39.108 "rw_mbytes_per_sec": 0, 00:14:39.108 "r_mbytes_per_sec": 0, 00:14:39.108 
"w_mbytes_per_sec": 0 00:14:39.108 }, 00:14:39.108 "claimed": false, 00:14:39.108 "zoned": false, 00:14:39.108 "supported_io_types": { 00:14:39.108 "read": true, 00:14:39.108 "write": true, 00:14:39.108 "unmap": true, 00:14:39.108 "write_zeroes": true, 00:14:39.108 "flush": true, 00:14:39.108 "reset": true, 00:14:39.108 "compare": false, 00:14:39.108 "compare_and_write": false, 00:14:39.108 "abort": true, 00:14:39.108 "nvme_admin": false, 00:14:39.108 "nvme_io": false 00:14:39.108 }, 00:14:39.108 "memory_domains": [ 00:14:39.108 { 00:14:39.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.108 "dma_device_type": 2 00:14:39.108 } 00:14:39.108 ], 00:14:39.108 "driver_specific": {} 00:14:39.108 } 00:14:39.108 ] 00:14:39.108 06:07:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.108 06:07:09 -- common/autotest_common.sh@895 -- # return 0 00:14:39.108 06:07:09 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:14:39.108 06:07:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.108 06:07:09 -- common/autotest_common.sh@10 -- # set +x 00:14:39.108 06:07:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.108 06:07:09 -- bdev/blockdev.sh@513 -- # NOT wait 112144 00:14:39.108 06:07:09 -- common/autotest_common.sh@640 -- # local es=0 00:14:39.108 06:07:09 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 112144 00:14:39.108 06:07:09 -- common/autotest_common.sh@628 -- # local arg=wait 00:14:39.108 06:07:09 -- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:14:39.108 06:07:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:39.108 06:07:09 -- common/autotest_common.sh@632 -- # type -t wait 00:14:39.108 06:07:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:39.108 06:07:09 -- common/autotest_common.sh@643 -- # wait 112144 00:14:39.366 Running I/O for 5 seconds... 
00:14:39.367 task offset: 38520 on job bdev=EE_Dev_1 fails 00:14:39.367 00:14:39.367 Latency(us) 00:14:39.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.367 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:39.367 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:14:39.367 EE_Dev_1 : 0.00 33485.54 130.80 7610.35 0.00 314.71 117.03 569.54 00:14:39.367 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:39.367 Dev_2 : 0.00 23021.58 89.93 0.00 0.00 485.15 115.08 893.32 00:14:39.367 =================================================================================================================== 00:14:39.367 Total : 56507.12 220.73 7610.35 0.00 407.15 115.08 893.32 00:14:39.367 [2024-06-11 06:07:09.782397] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:39.367 request: 00:14:39.367 { 00:14:39.367 "method": "perform_tests", 00:14:39.367 "req_id": 1 00:14:39.367 } 00:14:39.367 Got JSON-RPC error response 00:14:39.367 response: 00:14:39.367 { 00:14:39.367 "code": -32603, 00:14:39.367 "message": "bdevperf failed with error Operation not permitted" 00:14:39.367 } 00:14:41.906 06:07:11 -- common/autotest_common.sh@643 -- # es=255 00:14:41.906 06:07:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:41.906 06:07:11 -- common/autotest_common.sh@652 -- # es=127 00:14:41.906 06:07:11 -- common/autotest_common.sh@653 -- # case "$es" in 00:14:41.906 06:07:11 -- common/autotest_common.sh@660 -- # es=1 00:14:41.906 06:07:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:41.906 00:14:41.906 real 0m13.280s 00:14:41.906 user 0m13.110s 00:14:41.906 sys 0m1.226s 00:14:41.906 06:07:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:41.906 06:07:11 -- common/autotest_common.sh@10 -- # set +x 00:14:41.906 ************************************ 00:14:41.906 END TEST bdev_error 00:14:41.906 ************************************ 00:14:41.906 06:07:11 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:14:41.906 06:07:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:41.906 06:07:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:41.906 06:07:11 -- common/autotest_common.sh@10 -- # set +x 00:14:41.906 ************************************ 00:14:41.906 START TEST bdev_stat 00:14:41.906 ************************************ 00:14:41.906 06:07:12 -- common/autotest_common.sh@1104 -- # stat_test_suite '' 00:14:41.906 06:07:12 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:14:41.906 06:07:12 -- bdev/blockdev.sh@594 -- # STAT_PID=112214 00:14:41.906 Process Bdev IO statistics testing pid: 112214 00:14:41.906 06:07:12 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 112214' 00:14:41.906 06:07:12 -- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:14:41.906 06:07:12 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:14:41.906 06:07:12 -- bdev/blockdev.sh@597 -- # waitforlisten 112214 00:14:41.906 06:07:12 -- common/autotest_common.sh@819 -- # '[' -z 112214 ']' 00:14:41.906 06:07:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.906 06:07:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:41.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
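The stat suite starting here reuses the two-reactor setup from QD sampling (-m 0x3) so that Malloc_STAT gets one I/O channel per thread. Its snapshot mechanics are visible in the xtrace and iostats JSON below; the per-channel summation happens past the portion captured here, so the closing comment is a paraphrase:

    # Stat snapshot mechanics, per the xtrace below (rpc_cmd wraps rpc.py).
    iostats=$(rpc_cmd bdev_get_iostat -b Malloc_STAT)                 # aggregate counters
    io_count1=$(jq -r '.bdevs[0].num_read_ops' <<< "$iostats")        # 224259 in this run
    iostats_per_channel=$(rpc_cmd bdev_get_iostat -b Malloc_STAT -c)  # one entry per thread
    # The suite then sums num_read_ops across channels and requires the totals to
    # stay consistent with a later aggregate snapshot taken while I/O keeps running.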
00:14:41.906 06:07:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.906 06:07:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:41.906 06:07:12 -- common/autotest_common.sh@10 -- # set +x 00:14:41.906 [2024-06-11 06:07:12.072636] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:14:41.906 [2024-06-11 06:07:12.072824] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112214 ] 00:14:41.906 [2024-06-11 06:07:12.246353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:41.906 [2024-06-11 06:07:12.537159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.906 [2024-06-11 06:07:12.537169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.473 06:07:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:42.473 06:07:12 -- common/autotest_common.sh@852 -- # return 0 00:14:42.473 06:07:12 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:14:42.473 06:07:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.473 06:07:12 -- common/autotest_common.sh@10 -- # set +x 00:14:42.473 Malloc_STAT 00:14:42.473 06:07:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.473 06:07:13 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:14:42.473 06:07:13 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_STAT 00:14:42.473 06:07:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:42.473 06:07:13 -- common/autotest_common.sh@889 -- # local i 00:14:42.473 06:07:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:42.473 06:07:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:42.473 06:07:13 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:42.473 06:07:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.473 06:07:13 -- common/autotest_common.sh@10 -- # set +x 00:14:42.473 06:07:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.473 06:07:13 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:14:42.473 06:07:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.473 06:07:13 -- common/autotest_common.sh@10 -- # set +x 00:14:42.731 [ 00:14:42.731 { 00:14:42.731 "name": "Malloc_STAT", 00:14:42.731 "aliases": [ 00:14:42.731 "2c4ee146-9164-474d-a2d3-cdd6e9b73fcf" 00:14:42.731 ], 00:14:42.731 "product_name": "Malloc disk", 00:14:42.731 "block_size": 512, 00:14:42.731 "num_blocks": 262144, 00:14:42.731 "uuid": "2c4ee146-9164-474d-a2d3-cdd6e9b73fcf", 00:14:42.731 "assigned_rate_limits": { 00:14:42.731 "rw_ios_per_sec": 0, 00:14:42.731 "rw_mbytes_per_sec": 0, 00:14:42.731 "r_mbytes_per_sec": 0, 00:14:42.731 "w_mbytes_per_sec": 0 00:14:42.731 }, 00:14:42.731 "claimed": false, 00:14:42.731 "zoned": false, 00:14:42.731 "supported_io_types": { 00:14:42.731 "read": true, 00:14:42.731 "write": true, 00:14:42.731 "unmap": true, 00:14:42.731 "write_zeroes": true, 00:14:42.731 "flush": true, 00:14:42.731 "reset": true, 00:14:42.731 "compare": false, 00:14:42.731 "compare_and_write": false, 00:14:42.731 "abort": true, 00:14:42.731 "nvme_admin": false, 00:14:42.731 "nvme_io": false 00:14:42.731 }, 00:14:42.731 "memory_domains": [ 00:14:42.731 { 
00:14:42.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.731 "dma_device_type": 2 00:14:42.731 } 00:14:42.731 ], 00:14:42.731 "driver_specific": {} 00:14:42.731 } 00:14:42.731 ] 00:14:42.731 06:07:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.731 06:07:13 -- common/autotest_common.sh@895 -- # return 0 00:14:42.731 06:07:13 -- bdev/blockdev.sh@603 -- # sleep 2 00:14:42.731 06:07:13 -- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:42.731 Running I/O for 10 seconds... 00:14:44.630 06:07:15 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:14:44.630 06:07:15 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:14:44.630 06:07:15 -- bdev/blockdev.sh@558 -- # local iostats 00:14:44.630 06:07:15 -- bdev/blockdev.sh@559 -- # local io_count1 00:14:44.630 06:07:15 -- bdev/blockdev.sh@560 -- # local io_count2 00:14:44.630 06:07:15 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:14:44.630 06:07:15 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:14:44.630 06:07:15 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:14:44.630 06:07:15 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:14:44.630 06:07:15 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:44.630 06:07:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.630 06:07:15 -- common/autotest_common.sh@10 -- # set +x 00:14:44.630 06:07:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.630 06:07:15 -- bdev/blockdev.sh@566 -- # iostats='{ 00:14:44.630 "tick_rate": 2100000000, 00:14:44.630 "ticks": 1767657160334, 00:14:44.630 "bdevs": [ 00:14:44.630 { 00:14:44.630 "name": "Malloc_STAT", 00:14:44.630 "bytes_read": 918589952, 00:14:44.630 "num_read_ops": 224259, 00:14:44.630 "bytes_written": 0, 00:14:44.630 "num_write_ops": 0, 00:14:44.630 "bytes_unmapped": 0, 00:14:44.631 "num_unmap_ops": 0, 00:14:44.631 "bytes_copied": 0, 00:14:44.631 "num_copy_ops": 0, 00:14:44.631 "read_latency_ticks": 2037361209012, 00:14:44.631 "max_read_latency_ticks": 12910120, 00:14:44.631 "min_read_latency_ticks": 324916, 00:14:44.631 "write_latency_ticks": 0, 00:14:44.631 "max_write_latency_ticks": 0, 00:14:44.631 "min_write_latency_ticks": 0, 00:14:44.631 "unmap_latency_ticks": 0, 00:14:44.631 "max_unmap_latency_ticks": 0, 00:14:44.631 "min_unmap_latency_ticks": 0, 00:14:44.631 "copy_latency_ticks": 0, 00:14:44.631 "max_copy_latency_ticks": 0, 00:14:44.631 "min_copy_latency_ticks": 0, 00:14:44.631 "io_error": {} 00:14:44.631 } 00:14:44.631 ] 00:14:44.631 }' 00:14:44.631 06:07:15 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:14:44.631 06:07:15 -- bdev/blockdev.sh@567 -- # io_count1=224259 00:14:44.631 06:07:15 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:14:44.631 06:07:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.631 06:07:15 -- common/autotest_common.sh@10 -- # set +x 00:14:44.631 06:07:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.631 06:07:15 -- bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:14:44.631 "tick_rate": 2100000000, 00:14:44.631 "ticks": 1767762810464, 00:14:44.631 "name": "Malloc_STAT", 00:14:44.631 "channels": [ 00:14:44.631 { 00:14:44.631 "thread_id": 2, 00:14:44.631 "bytes_read": 468713472, 00:14:44.631 "num_read_ops": 114432, 00:14:44.631 "bytes_written": 0, 00:14:44.631 "num_write_ops": 0, 00:14:44.631 "bytes_unmapped": 0, 00:14:44.631 "num_unmap_ops": 0, 00:14:44.631 
"bytes_copied": 0, 00:14:44.631 "num_copy_ops": 0, 00:14:44.631 "read_latency_ticks": 1045065499178, 00:14:44.631 "max_read_latency_ticks": 13310020, 00:14:44.631 "min_read_latency_ticks": 7035520, 00:14:44.631 "write_latency_ticks": 0, 00:14:44.631 "max_write_latency_ticks": 0, 00:14:44.631 "min_write_latency_ticks": 0, 00:14:44.631 "unmap_latency_ticks": 0, 00:14:44.631 "max_unmap_latency_ticks": 0, 00:14:44.631 "min_unmap_latency_ticks": 0, 00:14:44.631 "copy_latency_ticks": 0, 00:14:44.631 "max_copy_latency_ticks": 0, 00:14:44.631 "min_copy_latency_ticks": 0 00:14:44.631 }, 00:14:44.631 { 00:14:44.631 "thread_id": 3, 00:14:44.631 "bytes_read": 473956352, 00:14:44.631 "num_read_ops": 115712, 00:14:44.631 "bytes_written": 0, 00:14:44.631 "num_write_ops": 0, 00:14:44.631 "bytes_unmapped": 0, 00:14:44.631 "num_unmap_ops": 0, 00:14:44.631 "bytes_copied": 0, 00:14:44.631 "num_copy_ops": 0, 00:14:44.631 "read_latency_ticks": 1046976119244, 00:14:44.631 "max_read_latency_ticks": 10003744, 00:14:44.631 "min_read_latency_ticks": 5961464, 00:14:44.631 "write_latency_ticks": 0, 00:14:44.631 "max_write_latency_ticks": 0, 00:14:44.631 "min_write_latency_ticks": 0, 00:14:44.631 "unmap_latency_ticks": 0, 00:14:44.631 "max_unmap_latency_ticks": 0, 00:14:44.631 "min_unmap_latency_ticks": 0, 00:14:44.631 "copy_latency_ticks": 0, 00:14:44.631 "max_copy_latency_ticks": 0, 00:14:44.631 "min_copy_latency_ticks": 0 00:14:44.631 } 00:14:44.631 ] 00:14:44.631 }' 00:14:44.631 06:07:15 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:14:44.631 06:07:15 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=114432 00:14:44.631 06:07:15 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=114432 00:14:44.631 06:07:15 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:14:44.889 06:07:15 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=115712 00:14:44.889 06:07:15 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=230144 00:14:44.889 06:07:15 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:44.889 06:07:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.889 06:07:15 -- common/autotest_common.sh@10 -- # set +x 00:14:44.889 06:07:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.889 06:07:15 -- bdev/blockdev.sh@575 -- # iostats='{ 00:14:44.889 "tick_rate": 2100000000, 00:14:44.889 "ticks": 1767992153876, 00:14:44.889 "bdevs": [ 00:14:44.889 { 00:14:44.889 "name": "Malloc_STAT", 00:14:44.889 "bytes_read": 995136000, 00:14:44.889 "num_read_ops": 242947, 00:14:44.889 "bytes_written": 0, 00:14:44.889 "num_write_ops": 0, 00:14:44.889 "bytes_unmapped": 0, 00:14:44.889 "num_unmap_ops": 0, 00:14:44.889 "bytes_copied": 0, 00:14:44.889 "num_copy_ops": 0, 00:14:44.889 "read_latency_ticks": 2210277727544, 00:14:44.889 "max_read_latency_ticks": 13310020, 00:14:44.889 "min_read_latency_ticks": 324916, 00:14:44.889 "write_latency_ticks": 0, 00:14:44.889 "max_write_latency_ticks": 0, 00:14:44.889 "min_write_latency_ticks": 0, 00:14:44.889 "unmap_latency_ticks": 0, 00:14:44.889 "max_unmap_latency_ticks": 0, 00:14:44.889 "min_unmap_latency_ticks": 0, 00:14:44.889 "copy_latency_ticks": 0, 00:14:44.889 "max_copy_latency_ticks": 0, 00:14:44.889 "min_copy_latency_ticks": 0, 00:14:44.889 "io_error": {} 00:14:44.889 } 00:14:44.889 ] 00:14:44.889 }' 00:14:44.889 06:07:15 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:14:44.889 06:07:15 -- bdev/blockdev.sh@576 -- # io_count2=242947 00:14:44.889 06:07:15 -- bdev/blockdev.sh@581 -- # '[' 230144 
-lt 224259 ']' 00:14:44.889 06:07:15 -- bdev/blockdev.sh@581 -- # '[' 230144 -gt 242947 ']' 00:14:44.889 06:07:15 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:14:44.889 06:07:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.889 06:07:15 -- common/autotest_common.sh@10 -- # set +x 00:14:44.889 00:14:44.889 Latency(us) 00:14:44.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.889 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:14:44.889 Malloc_STAT : 2.12 58519.55 228.59 0.00 0.00 4365.32 1045.46 6553.60 00:14:44.889 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:44.889 Malloc_STAT : 2.12 59464.92 232.28 0.00 0.00 4295.80 678.77 4774.77 00:14:44.889 =================================================================================================================== 00:14:44.889 Total : 117984.48 460.88 0.00 0.00 4330.27 678.77 6553.60 00:14:44.889 0 00:14:44.889 06:07:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.889 06:07:15 -- bdev/blockdev.sh@607 -- # killprocess 112214 00:14:44.889 06:07:15 -- common/autotest_common.sh@926 -- # '[' -z 112214 ']' 00:14:44.889 06:07:15 -- common/autotest_common.sh@930 -- # kill -0 112214 00:14:44.889 06:07:15 -- common/autotest_common.sh@931 -- # uname 00:14:44.889 06:07:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:44.889 06:07:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 112214 00:14:45.148 06:07:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:45.148 06:07:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:45.148 killing process with pid 112214 00:14:45.148 06:07:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 112214' 00:14:45.148 06:07:15 -- common/autotest_common.sh@945 -- # kill 112214 00:14:45.148 Received shutdown signal, test time was about 2.306882 seconds 00:14:45.148 00:14:45.148 Latency(us) 00:14:45.148 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.148 =================================================================================================================== 00:14:45.148 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:45.148 06:07:15 -- common/autotest_common.sh@950 -- # wait 112214 00:14:47.049 06:07:17 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:14:47.049 00:14:47.049 real 0m5.203s 00:14:47.049 user 0m9.478s 00:14:47.049 sys 0m0.535s 00:14:47.049 06:07:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:47.049 06:07:17 -- common/autotest_common.sh@10 -- # set +x 00:14:47.049 ************************************ 00:14:47.049 END TEST bdev_stat 00:14:47.049 ************************************ 00:14:47.049 06:07:17 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 00:14:47.049 06:07:17 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:14:47.049 06:07:17 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:14:47.049 06:07:17 -- bdev/blockdev.sh@809 -- # cleanup 00:14:47.049 06:07:17 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:47.049 06:07:17 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:47.049 06:07:17 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:14:47.049 06:07:17 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:14:47.049 06:07:17 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:14:47.049 06:07:17 -- 
bdev/blockdev.sh@38 -- # [[ bdev == xnvme ]] 00:14:47.049 00:14:47.049 real 2m36.410s 00:14:47.049 user 6m1.796s 00:14:47.049 sys 0m26.686s 00:14:47.049 06:07:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:47.049 06:07:17 -- common/autotest_common.sh@10 -- # set +x 00:14:47.049 ************************************ 00:14:47.049 END TEST blockdev_general 00:14:47.049 ************************************ 00:14:47.049 06:07:17 -- spdk/autotest.sh@196 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:47.049 06:07:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:47.049 06:07:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:47.049 06:07:17 -- common/autotest_common.sh@10 -- # set +x 00:14:47.049 ************************************ 00:14:47.049 START TEST bdev_raid 00:14:47.049 ************************************ 00:14:47.049 06:07:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:47.049 * Looking for test storage... 00:14:47.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:47.049 06:07:17 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:47.049 06:07:17 -- bdev/nbd_common.sh@6 -- # set -e 00:14:47.049 06:07:17 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:14:47.049 06:07:17 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:14:47.049 06:07:17 -- bdev/bdev_raid.sh@716 -- # uname -s 00:14:47.049 06:07:17 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:14:47.049 06:07:17 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:14:47.049 06:07:17 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:14:47.049 06:07:17 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:14:47.049 06:07:17 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:14:47.049 06:07:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:47.049 06:07:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:47.049 06:07:17 -- common/autotest_common.sh@10 -- # set +x 00:14:47.049 ************************************ 00:14:47.049 START TEST raid_function_test_raid0 00:14:47.049 ************************************ 00:14:47.049 06:07:17 -- common/autotest_common.sh@1104 -- # raid_function_test raid0 00:14:47.049 06:07:17 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:14:47.049 06:07:17 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:14:47.049 06:07:17 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:14:47.049 06:07:17 -- bdev/bdev_raid.sh@86 -- # raid_pid=112372 00:14:47.049 06:07:17 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:47.049 Process raid pid: 112372 00:14:47.049 06:07:17 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 112372' 00:14:47.049 06:07:17 -- bdev/bdev_raid.sh@88 -- # waitforlisten 112372 /var/tmp/spdk-raid.sock 00:14:47.049 06:07:17 -- common/autotest_common.sh@819 -- # '[' -z 112372 ']' 00:14:47.049 06:07:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:47.049 06:07:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:47.049 06:07:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
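The function test about to run assembles its array indirectly: configure_raid_bdev writes the construction RPCs into rpcs.txt and replays the file against the -r socket. The equivalent direct calls look roughly like this (a sketch: the malloc backing and base sizes are assumptions, the raid parameters mirror the test):

    sock=/var/tmp/spdk-raid.sock
    rpc.py -s $sock bdev_malloc_create -b Base_1 32 512
    rpc.py -s $sock bdev_malloc_create -b Base_2 32 512
    rpc.py -s $sock bdev_raid_create -z 64 -r raid0 -b 'Base_1 Base_2' -n raid
    # the test then recovers the name of the one online array
    rpc.py -s $sock bdev_raid_get_bdevs online | jq -r '.[0]["name"] | select(.)'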
00:14:47.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:47.049 06:07:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:47.049 06:07:17 -- common/autotest_common.sh@10 -- # set +x 00:14:47.049 [2024-06-11 06:07:17.581897] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:14:47.049 [2024-06-11 06:07:17.582224] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.308 [2024-06-11 06:07:17.776887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.567 [2024-06-11 06:07:18.017062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.826 [2024-06-11 06:07:18.261876] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:48.085 06:07:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:48.085 06:07:18 -- common/autotest_common.sh@852 -- # return 0 00:14:48.085 06:07:18 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:14:48.085 06:07:18 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:14:48.085 06:07:18 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:48.085 06:07:18 -- bdev/bdev_raid.sh@70 -- # cat 00:14:48.085 06:07:18 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:48.344 [2024-06-11 06:07:18.918350] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:48.344 [2024-06-11 06:07:18.920566] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:48.344 [2024-06-11 06:07:18.920638] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:14:48.344 [2024-06-11 06:07:18.920648] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:48.344 [2024-06-11 06:07:18.920771] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:14:48.344 [2024-06-11 06:07:18.921131] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:14:48.344 [2024-06-11 06:07:18.921150] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006f80 00:14:48.344 [2024-06-11 06:07:18.921297] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.344 Base_1 00:14:48.344 Base_2 00:14:48.344 06:07:18 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:48.344 06:07:18 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:14:48.344 06:07:18 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:48.603 06:07:19 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:14:48.603 06:07:19 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:14:48.603 06:07:19 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:48.603 06:07:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:48.603 06:07:19 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:14:48.603 06:07:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:48.603 06:07:19 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:48.603 06:07:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 
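nbd_start_disks then maps the raid bdev onto /dev/nbd0 and polls /proc/partitions until the kernel has registered the node, as sketched here (the nbd module was already loaded via modprobe above; the retry count mirrors the waitfornbd helper):

    rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0
    # wait for the device node to show up, up to 20 polls
    for i in $(seq 1 20); do
      grep -q -w nbd0 /proc/partitions && break
      sleep 0.1
    done
    # sanity-read one block through the new node, as the helper does
    dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct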
00:14:48.603 06:07:19 -- bdev/nbd_common.sh@12 -- # local i 00:14:48.603 06:07:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:48.603 06:07:19 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:48.603 06:07:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:48.868 [2024-06-11 06:07:19.426447] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:14:48.868 /dev/nbd0 00:14:48.868 06:07:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:48.868 06:07:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:48.868 06:07:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:48.868 06:07:19 -- common/autotest_common.sh@857 -- # local i 00:14:48.868 06:07:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:48.868 06:07:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:48.868 06:07:19 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:48.868 06:07:19 -- common/autotest_common.sh@861 -- # break 00:14:48.868 06:07:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:48.868 06:07:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:48.868 06:07:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:48.868 1+0 records in 00:14:48.868 1+0 records out 00:14:48.868 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262551 s, 15.6 MB/s 00:14:48.868 06:07:19 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.869 06:07:19 -- common/autotest_common.sh@874 -- # size=4096 00:14:48.869 06:07:19 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.869 06:07:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:48.869 06:07:19 -- common/autotest_common.sh@877 -- # return 0 00:14:48.869 06:07:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:48.869 06:07:19 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:48.869 06:07:19 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:48.869 06:07:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:48.869 06:07:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:49.128 06:07:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:49.128 { 00:14:49.128 "nbd_device": "/dev/nbd0", 00:14:49.128 "bdev_name": "raid" 00:14:49.128 } 00:14:49.128 ]' 00:14:49.128 06:07:19 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:49.128 { 00:14:49.128 "nbd_device": "/dev/nbd0", 00:14:49.128 "bdev_name": "raid" 00:14:49.128 } 00:14:49.128 ]' 00:14:49.128 06:07:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:49.387 06:07:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:49.387 06:07:19 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:49.387 06:07:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:49.387 06:07:19 -- bdev/nbd_common.sh@65 -- # count=1 00:14:49.387 06:07:19 -- bdev/nbd_common.sh@66 -- # echo 1 00:14:49.387 06:07:19 -- bdev/bdev_raid.sh@98 -- # count=1 00:14:49.387 06:07:19 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:14:49.387 06:07:19 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:49.387 06:07:19 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:14:49.387 06:07:19 -- bdev/bdev_raid.sh@18 -- # local 
nbd=/dev/nbd0 00:14:49.387 06:07:19 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:49.387 06:07:19 -- bdev/bdev_raid.sh@20 -- # local blksize 00:14:49.387 06:07:19 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:49.387 06:07:19 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:14:49.387 06:07:19 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:14:49.387 06:07:19 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:14:49.387 06:07:19 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:14:49.387 06:07:19 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:14:49.387 06:07:19 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:14:49.387 06:07:19 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:14:49.387 06:07:19 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:14:49.387 06:07:19 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:14:49.387 06:07:19 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:14:49.387 06:07:19 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:14:49.387 06:07:19 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:14:49.387 4096+0 records in 00:14:49.387 4096+0 records out 00:14:49.387 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0264462 s, 79.3 MB/s 00:14:49.387 06:07:19 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:49.387 4096+0 records in 00:14:49.387 4096+0 records out 00:14:49.387 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.184748 s, 11.4 MB/s 00:14:49.387 06:07:20 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:14:49.387 06:07:20 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:49.647 06:07:20 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:14:49.647 06:07:20 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:49.647 06:07:20 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:14:49.647 06:07:20 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:14:49.647 06:07:20 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:49.647 128+0 records in 00:14:49.647 128+0 records out 00:14:49.647 65536 bytes (66 kB, 64 KiB) copied, 0.00123928 s, 52.9 MB/s 00:14:49.647 06:07:20 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:49.647 06:07:20 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:49.647 06:07:20 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:49.647 06:07:20 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:49.647 06:07:20 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:49.647 06:07:20 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:14:49.647 06:07:20 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:14:49.647 06:07:20 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:49.647 2035+0 records in 00:14:49.647 2035+0 records out 00:14:49.647 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00499531 s, 209 MB/s 00:14:49.647 06:07:20 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:49.647 06:07:20 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:49.647 06:07:20 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:49.647 06:07:20 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:49.647 06:07:20 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:49.647 06:07:20 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:14:49.647 06:07:20 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:14:49.647 
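Each pass of this loop punches a hole in the raid device and mirrors the same hole into the reference file, so the byte-level cmp proves the unmap hit exactly the requested range and nothing else. The third pass, about to run with unmap_off=164352 and unmap_len=233472, amounts to:

    # 321 blocks * 512 B = 164352; 456 blocks * 512 B = 233472
    dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc
    blkdiscard -o 164352 -l 233472 /dev/nbd0
    blockdev --flushbufs /dev/nbd0
    cmp -b -n 2097152 /raidrandtest /dev/nbd0   # the whole 2 MiB image must still match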
06:07:20 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:49.647 456+0 records in 00:14:49.647 456+0 records out 00:14:49.647 233472 bytes (233 kB, 228 KiB) copied, 0.00343108 s, 68.0 MB/s 00:14:49.647 06:07:20 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:49.647 06:07:20 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:49.647 06:07:20 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:49.647 06:07:20 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:49.647 06:07:20 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:49.647 06:07:20 -- bdev/bdev_raid.sh@53 -- # return 0 00:14:49.647 06:07:20 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:49.647 06:07:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:49.647 06:07:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:49.647 06:07:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:49.647 06:07:20 -- bdev/nbd_common.sh@51 -- # local i 00:14:49.647 06:07:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:49.647 06:07:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:49.907 06:07:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:49.907 [2024-06-11 06:07:20.385555] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.907 06:07:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:49.907 06:07:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:49.907 06:07:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:49.907 06:07:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:49.907 06:07:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:49.907 06:07:20 -- bdev/nbd_common.sh@41 -- # break 00:14:49.907 06:07:20 -- bdev/nbd_common.sh@45 -- # return 0 00:14:49.907 06:07:20 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:49.907 06:07:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:49.907 06:07:20 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:50.166 06:07:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:50.166 06:07:20 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:50.166 06:07:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:50.166 06:07:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:50.166 06:07:20 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:50.166 06:07:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:50.166 06:07:20 -- bdev/nbd_common.sh@65 -- # true 00:14:50.166 06:07:20 -- bdev/nbd_common.sh@65 -- # count=0 00:14:50.166 06:07:20 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:50.166 06:07:20 -- bdev/bdev_raid.sh@106 -- # count=0 00:14:50.166 06:07:20 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:14:50.166 06:07:20 -- bdev/bdev_raid.sh@111 -- # killprocess 112372 00:14:50.166 06:07:20 -- common/autotest_common.sh@926 -- # '[' -z 112372 ']' 00:14:50.166 06:07:20 -- common/autotest_common.sh@930 -- # kill -0 112372 00:14:50.166 06:07:20 -- common/autotest_common.sh@931 -- # uname 00:14:50.166 06:07:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:50.166 06:07:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 112372 00:14:50.166 06:07:20 -- common/autotest_common.sh@932 -- # 
process_name=reactor_0 00:14:50.166 killing process with pid 112372 00:14:50.166 06:07:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:50.166 06:07:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 112372' 00:14:50.166 06:07:20 -- common/autotest_common.sh@945 -- # kill 112372 00:14:50.166 [2024-06-11 06:07:20.698153] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:50.166 06:07:20 -- common/autotest_common.sh@950 -- # wait 112372 00:14:50.166 [2024-06-11 06:07:20.698264] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.166 [2024-06-11 06:07:20.698329] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:50.166 [2024-06-11 06:07:20.698340] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name raid, state offline 00:14:50.426 [2024-06-11 06:07:20.902832] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:51.802 06:07:22 -- bdev/bdev_raid.sh@113 -- # return 0 00:14:51.802 00:14:51.802 real 0m4.797s 00:14:51.802 user 0m5.784s 00:14:51.802 sys 0m1.283s 00:14:51.802 06:07:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:51.802 ************************************ 00:14:51.802 END TEST raid_function_test_raid0 00:14:51.802 ************************************ 00:14:51.802 06:07:22 -- common/autotest_common.sh@10 -- # set +x 00:14:51.802 06:07:22 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:14:51.802 06:07:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:51.802 06:07:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:51.802 06:07:22 -- common/autotest_common.sh@10 -- # set +x 00:14:51.803 ************************************ 00:14:51.803 START TEST raid_function_test_concat 00:14:51.803 ************************************ 00:14:51.803 06:07:22 -- common/autotest_common.sh@1104 -- # raid_function_test concat 00:14:51.803 06:07:22 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:14:51.803 06:07:22 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:14:51.803 06:07:22 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:14:51.803 06:07:22 -- bdev/bdev_raid.sh@86 -- # raid_pid=112535 00:14:51.803 Process raid pid: 112535 00:14:51.803 06:07:22 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 112535' 00:14:51.803 06:07:22 -- bdev/bdev_raid.sh@88 -- # waitforlisten 112535 /var/tmp/spdk-raid.sock 00:14:51.803 06:07:22 -- common/autotest_common.sh@819 -- # '[' -z 112535 ']' 00:14:51.803 06:07:22 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:51.803 06:07:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:51.803 06:07:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:51.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:51.803 06:07:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:51.803 06:07:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:51.803 06:07:22 -- common/autotest_common.sh@10 -- # set +x 00:14:51.803 [2024-06-11 06:07:22.423354] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
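The killprocess helper used just above to tear the raid0 app down only signals a pid after confirming it is still alive and that its comm name is an SPDK reactor. A trimmed sketch of that guard (simplified; how the real helper in autotest_common.sh handles a sudo-wrapped process is assumed here, not shown in this log):

    killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1                        # must still be running
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
      if [ "$process_name" = sudo ]; then
        return 1   # assumption: the real helper special-cases sudo wrappers
      fi
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"                        # SIGTERM, then reap
    }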
00:14:51.803 [2024-06-11 06:07:22.423559] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.061 [2024-06-11 06:07:22.608114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.337 [2024-06-11 06:07:22.851880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.620 [2024-06-11 06:07:23.103793] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:52.878 06:07:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:52.878 06:07:23 -- common/autotest_common.sh@852 -- # return 0 00:14:52.878 06:07:23 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:14:52.878 06:07:23 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:14:52.878 06:07:23 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:52.878 06:07:23 -- bdev/bdev_raid.sh@70 -- # cat 00:14:52.878 06:07:23 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:53.137 [2024-06-11 06:07:23.646688] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:53.137 [2024-06-11 06:07:23.648907] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:53.137 [2024-06-11 06:07:23.648972] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:14:53.137 [2024-06-11 06:07:23.648981] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:53.137 [2024-06-11 06:07:23.649119] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:14:53.137 [2024-06-11 06:07:23.649444] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:14:53.137 [2024-06-11 06:07:23.649462] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006f80 00:14:53.137 [2024-06-11 06:07:23.649626] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.137 Base_1 00:14:53.137 Base_2 00:14:53.137 06:07:23 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:53.137 06:07:23 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:53.137 06:07:23 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:14:53.396 06:07:23 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:14:53.396 06:07:23 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:14:53.396 06:07:23 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:53.396 06:07:23 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:53.396 06:07:23 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:14:53.396 06:07:23 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:53.396 06:07:23 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:53.396 06:07:23 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:53.396 06:07:23 -- bdev/nbd_common.sh@12 -- # local i 00:14:53.396 06:07:23 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:53.396 06:07:23 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:53.396 06:07:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:53.654 [2024-06-11 
06:07:24.066765] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:14:53.654 /dev/nbd0 00:14:53.654 06:07:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:53.654 06:07:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:53.654 06:07:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:53.654 06:07:24 -- common/autotest_common.sh@857 -- # local i 00:14:53.654 06:07:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:53.654 06:07:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:53.654 06:07:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:53.654 06:07:24 -- common/autotest_common.sh@861 -- # break 00:14:53.654 06:07:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:53.654 06:07:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:53.654 06:07:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:53.654 1+0 records in 00:14:53.654 1+0 records out 00:14:53.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196276 s, 20.9 MB/s 00:14:53.654 06:07:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:53.654 06:07:24 -- common/autotest_common.sh@874 -- # size=4096 00:14:53.654 06:07:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:53.654 06:07:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:53.654 06:07:24 -- common/autotest_common.sh@877 -- # return 0 00:14:53.654 06:07:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:53.654 06:07:24 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:53.654 06:07:24 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:53.654 06:07:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:53.654 06:07:24 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:53.654 06:07:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:53.654 { 00:14:53.654 "nbd_device": "/dev/nbd0", 00:14:53.654 "bdev_name": "raid" 00:14:53.654 } 00:14:53.654 ]' 00:14:53.654 06:07:24 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:53.654 { 00:14:53.654 "nbd_device": "/dev/nbd0", 00:14:53.654 "bdev_name": "raid" 00:14:53.654 } 00:14:53.654 ]' 00:14:53.654 06:07:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:53.913 06:07:24 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:53.913 06:07:24 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:53.913 06:07:24 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:53.913 06:07:24 -- bdev/nbd_common.sh@65 -- # count=1 00:14:53.913 06:07:24 -- bdev/nbd_common.sh@66 -- # echo 1 00:14:53.913 06:07:24 -- bdev/bdev_raid.sh@98 -- # count=1 00:14:53.913 06:07:24 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:14:53.913 06:07:24 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:53.913 06:07:24 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:14:53.913 06:07:24 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:14:53.913 06:07:24 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:53.913 06:07:24 -- bdev/bdev_raid.sh@20 -- # local blksize 00:14:53.913 06:07:24 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:53.913 06:07:24 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:14:53.913 06:07:24 -- bdev/bdev_raid.sh@21 -- # cut -d 
' ' -f 5 00:14:53.913 06:07:24 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:14:53.913 06:07:24 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:14:53.913 06:07:24 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:14:53.913 06:07:24 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:14:53.913 06:07:24 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:14:53.913 06:07:24 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:14:53.913 06:07:24 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:14:53.913 06:07:24 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:14:53.913 06:07:24 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:14:53.913 06:07:24 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:14:53.913 4096+0 records in 00:14:53.913 4096+0 records out 00:14:53.913 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0350292 s, 59.9 MB/s 00:14:53.913 06:07:24 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:54.172 4096+0 records in 00:14:54.172 4096+0 records out 00:14:54.172 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.242101 s, 8.7 MB/s 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:54.172 128+0 records in 00:14:54.172 128+0 records out 00:14:54.172 65536 bytes (66 kB, 64 KiB) copied, 0.00124976 s, 52.4 MB/s 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:54.172 2035+0 records in 00:14:54.172 2035+0 records out 00:14:54.172 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00601686 s, 173 MB/s 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:54.172 456+0 records in 00:14:54.172 456+0 records out 00:14:54.172 233472 bytes (233 kB, 228 KiB) copied, 0.00253665 s, 92.0 MB/s 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:54.172 06:07:24 -- 
bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@53 -- # return 0 00:14:54.172 06:07:24 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:54.172 06:07:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:54.172 06:07:24 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:54.172 06:07:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:54.172 06:07:24 -- bdev/nbd_common.sh@51 -- # local i 00:14:54.172 06:07:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:54.172 06:07:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:54.431 06:07:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:54.431 [2024-06-11 06:07:24.927004] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.431 06:07:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:54.431 06:07:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:54.431 06:07:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:54.431 06:07:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:54.431 06:07:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:54.431 06:07:24 -- bdev/nbd_common.sh@41 -- # break 00:14:54.431 06:07:24 -- bdev/nbd_common.sh@45 -- # return 0 00:14:54.431 06:07:24 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:54.431 06:07:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:54.431 06:07:24 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:54.689 06:07:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:54.689 06:07:25 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:54.689 06:07:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:54.689 06:07:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:54.689 06:07:25 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:54.689 06:07:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:54.689 06:07:25 -- bdev/nbd_common.sh@65 -- # true 00:14:54.689 06:07:25 -- bdev/nbd_common.sh@65 -- # count=0 00:14:54.689 06:07:25 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:54.689 06:07:25 -- bdev/bdev_raid.sh@106 -- # count=0 00:14:54.689 06:07:25 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:14:54.689 06:07:25 -- bdev/bdev_raid.sh@111 -- # killprocess 112535 00:14:54.689 06:07:25 -- common/autotest_common.sh@926 -- # '[' -z 112535 ']' 00:14:54.689 06:07:25 -- common/autotest_common.sh@930 -- # kill -0 112535 00:14:54.689 06:07:25 -- common/autotest_common.sh@931 -- # uname 00:14:54.689 06:07:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:54.689 06:07:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 112535 00:14:54.689 06:07:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:54.689 killing process with pid 112535 00:14:54.689 06:07:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:54.689 06:07:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 112535' 00:14:54.689 06:07:25 -- common/autotest_common.sh@945 -- # kill 112535 00:14:54.689 [2024-06-11 06:07:25.270038] 
bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:54.689 06:07:25 -- common/autotest_common.sh@950 -- # wait 112535 00:14:54.689 [2024-06-11 06:07:25.270158] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.689 [2024-06-11 06:07:25.270229] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:54.689 [2024-06-11 06:07:25.270239] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name raid, state offline 00:14:54.949 [2024-06-11 06:07:25.474683] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:56.327 06:07:26 -- bdev/bdev_raid.sh@113 -- # return 0 00:14:56.327 00:14:56.327 real 0m4.508s 00:14:56.327 user 0m5.367s 00:14:56.327 sys 0m1.094s 00:14:56.327 06:07:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:56.327 06:07:26 -- common/autotest_common.sh@10 -- # set +x 00:14:56.327 ************************************ 00:14:56.327 END TEST raid_function_test_concat 00:14:56.327 ************************************ 00:14:56.327 06:07:26 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:14:56.327 06:07:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:56.327 06:07:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:56.327 06:07:26 -- common/autotest_common.sh@10 -- # set +x 00:14:56.327 ************************************ 00:14:56.327 START TEST raid0_resize_test 00:14:56.327 ************************************ 00:14:56.327 06:07:26 -- common/autotest_common.sh@1104 -- # raid0_resize_test 00:14:56.327 06:07:26 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:14:56.327 06:07:26 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:14:56.327 06:07:26 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:14:56.327 06:07:26 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:14:56.327 06:07:26 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:14:56.327 06:07:26 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:14:56.327 06:07:26 -- bdev/bdev_raid.sh@301 -- # raid_pid=112693 00:14:56.327 Process raid pid: 112693 00:14:56.327 06:07:26 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 112693' 00:14:56.327 06:07:26 -- bdev/bdev_raid.sh@303 -- # waitforlisten 112693 /var/tmp/spdk-raid.sock 00:14:56.327 06:07:26 -- common/autotest_common.sh@819 -- # '[' -z 112693 ']' 00:14:56.327 06:07:26 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:56.327 06:07:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:56.327 06:07:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:56.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:56.327 06:07:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:56.327 06:07:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:56.327 06:07:26 -- common/autotest_common.sh@10 -- # set +x 00:14:56.586 [2024-06-11 06:07:27.002328] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
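raid0_resize_test, which just launched, builds the array out of two null bdevs and grows the bases one at a time; the raid republishes its size only once every base has grown. Condensed, the flow below is (sizes in MiB, parameters matching the test):

    sock=/var/tmp/spdk-raid.sock
    rpc.py -s $sock bdev_null_create Base_1 32 512
    rpc.py -s $sock bdev_null_create Base_2 32 512
    rpc.py -s $sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid
    rpc.py -s $sock bdev_null_resize Base_1 64   # min base still 32 MiB: Raid stays 131072 blocks
    rpc.py -s $sock bdev_null_resize Base_2 64   # both bases grew: Raid doubles to 262144 blocks
    rpc.py -s $sock bdev_get_bdevs -b Raid | jq '.[].num_blocks'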
00:14:56.586 [2024-06-11 06:07:27.002554] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.586 [2024-06-11 06:07:27.184960] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.846 [2024-06-11 06:07:27.437279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.105 [2024-06-11 06:07:27.690416] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:57.363 06:07:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:57.363 06:07:27 -- common/autotest_common.sh@852 -- # return 0 00:14:57.363 06:07:27 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:14:57.622 Base_1 00:14:57.622 06:07:28 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:14:57.880 Base_2 00:14:57.880 06:07:28 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:14:57.880 [2024-06-11 06:07:28.479140] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:57.880 [2024-06-11 06:07:28.481454] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:57.880 [2024-06-11 06:07:28.481535] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:14:57.881 [2024-06-11 06:07:28.481544] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:57.881 [2024-06-11 06:07:28.481722] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005450 00:14:57.881 [2024-06-11 06:07:28.482049] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:14:57.881 [2024-06-11 06:07:28.482066] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000006f80 00:14:57.881 [2024-06-11 06:07:28.482252] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.881 06:07:28 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:14:58.139 [2024-06-11 06:07:28.651152] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:58.139 [2024-06-11 06:07:28.651184] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:14:58.139 true 00:14:58.139 06:07:28 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:58.139 06:07:28 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:14:58.398 [2024-06-11 06:07:28.907355] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:58.398 06:07:28 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:14:58.398 06:07:28 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:14:58.398 06:07:28 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:14:58.398 06:07:28 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:14:58.657 [2024-06-11 06:07:29.091176] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 
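The size assertions convert the block count jq pulls out back to mebibytes before comparing, i.e. raid_size_mb = blkcnt * blksize / (1024 * 1024). With the numbers above: 131072 * 512 / 1048576 = 64 MiB while only Base_1 has grown, and after Base_2 follows it becomes 262144 * 512 / 1048576 = 128 MiB. In shell terms:

    raid_size_mb=$(( blkcnt * 512 / 1048576 ))   # 131072 -> 64, 262144 -> 128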
00:14:58.657 [2024-06-11 06:07:29.091215] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:14:58.657 [2024-06-11 06:07:29.091539] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:14:58.657 [2024-06-11 06:07:29.091711] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:58.657 true 00:14:58.657 06:07:29 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:58.657 06:07:29 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:14:58.916 [2024-06-11 06:07:29.343411] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:58.916 06:07:29 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:14:58.916 06:07:29 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:14:58.916 06:07:29 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:14:58.916 06:07:29 -- bdev/bdev_raid.sh@332 -- # killprocess 112693 00:14:58.916 06:07:29 -- common/autotest_common.sh@926 -- # '[' -z 112693 ']' 00:14:58.916 06:07:29 -- common/autotest_common.sh@930 -- # kill -0 112693 00:14:58.916 06:07:29 -- common/autotest_common.sh@931 -- # uname 00:14:58.916 06:07:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:58.916 06:07:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 112693 00:14:58.916 06:07:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:58.916 06:07:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:58.916 06:07:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 112693' 00:14:58.916 killing process with pid 112693 00:14:58.916 06:07:29 -- common/autotest_common.sh@945 -- # kill 112693 00:14:58.916 [2024-06-11 06:07:29.384180] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:58.916 06:07:29 -- common/autotest_common.sh@950 -- # wait 112693 00:14:58.916 [2024-06-11 06:07:29.384276] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.916 [2024-06-11 06:07:29.384336] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.916 [2024-06-11 06:07:29.384345] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Raid, state offline 00:14:58.916 [2024-06-11 06:07:29.384993] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@334 -- # return 0 00:15:00.293 00:15:00.293 real 0m3.850s 00:15:00.293 user 0m5.138s 00:15:00.293 sys 0m0.624s 00:15:00.293 06:07:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.293 06:07:30 -- common/autotest_common.sh@10 -- # set +x 00:15:00.293 ************************************ 00:15:00.293 END TEST raid0_resize_test 00:15:00.293 ************************************ 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:15:00.293 06:07:30 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:00.293 06:07:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:00.293 06:07:30 -- common/autotest_common.sh@10 -- # set +x 00:15:00.293 ************************************ 00:15:00.293 START TEST 
raid_state_function_test 00:15:00.293 ************************************ 00:15:00.293 06:07:30 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 false 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@226 -- # raid_pid=112782 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 112782' 00:15:00.293 Process raid pid: 112782 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@228 -- # waitforlisten 112782 /var/tmp/spdk-raid.sock 00:15:00.293 06:07:30 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:00.293 06:07:30 -- common/autotest_common.sh@819 -- # '[' -z 112782 ']' 00:15:00.293 06:07:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:00.293 06:07:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:00.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:00.293 06:07:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:00.293 06:07:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:00.293 06:07:30 -- common/autotest_common.sh@10 -- # set +x 00:15:00.293 [2024-06-11 06:07:30.921768] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:15:00.293 [2024-06-11 06:07:30.921998] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.552 [2024-06-11 06:07:31.103964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.811 [2024-06-11 06:07:31.350937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.071 [2024-06-11 06:07:31.603383] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.330 06:07:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:01.330 06:07:31 -- common/autotest_common.sh@852 -- # return 0 00:15:01.330 06:07:31 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:01.330 [2024-06-11 06:07:31.963530] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:01.330 [2024-06-11 06:07:31.964314] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:01.330 [2024-06-11 06:07:31.964454] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:01.330 [2024-06-11 06:07:31.964683] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:01.589 06:07:31 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:01.589 06:07:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:01.589 06:07:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:01.589 06:07:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:01.589 06:07:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:01.590 06:07:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:01.590 06:07:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:01.590 06:07:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:01.590 06:07:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:01.590 06:07:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:01.590 06:07:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.590 06:07:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.590 06:07:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:01.590 "name": "Existed_Raid", 00:15:01.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.590 "strip_size_kb": 64, 00:15:01.590 "state": "configuring", 00:15:01.590 "raid_level": "raid0", 00:15:01.590 "superblock": false, 00:15:01.590 "num_base_bdevs": 2, 00:15:01.590 "num_base_bdevs_discovered": 0, 00:15:01.590 "num_base_bdevs_operational": 2, 00:15:01.590 "base_bdevs_list": [ 00:15:01.590 { 00:15:01.590 "name": "BaseBdev1", 00:15:01.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.590 "is_configured": false, 00:15:01.590 "data_offset": 0, 00:15:01.590 "data_size": 0 00:15:01.590 }, 00:15:01.590 { 00:15:01.590 "name": "BaseBdev2", 00:15:01.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.590 "is_configured": false, 00:15:01.590 "data_offset": 0, 00:15:01.590 "data_size": 0 00:15:01.590 } 00:15:01.590 ] 00:15:01.590 }' 00:15:01.590 06:07:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:01.590 06:07:32 -- 
common/autotest_common.sh@10 -- # set +x 00:15:02.157 06:07:32 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:02.416 [2024-06-11 06:07:32.955651] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:02.416 [2024-06-11 06:07:32.955906] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:15:02.416 06:07:32 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:02.674 [2024-06-11 06:07:33.207704] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:02.674 [2024-06-11 06:07:33.208371] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:02.674 [2024-06-11 06:07:33.208492] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:02.674 [2024-06-11 06:07:33.208643] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:02.674 06:07:33 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:02.933 [2024-06-11 06:07:33.491845] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:02.933 BaseBdev1 00:15:02.933 06:07:33 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:02.933 06:07:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:02.933 06:07:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:02.933 06:07:33 -- common/autotest_common.sh@889 -- # local i 00:15:02.933 06:07:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:02.933 06:07:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:02.933 06:07:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:03.192 06:07:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:03.451 [ 00:15:03.451 { 00:15:03.451 "name": "BaseBdev1", 00:15:03.451 "aliases": [ 00:15:03.451 "0e05e2e7-eb39-44bd-8b8e-68f8ef1e354c" 00:15:03.451 ], 00:15:03.451 "product_name": "Malloc disk", 00:15:03.451 "block_size": 512, 00:15:03.451 "num_blocks": 65536, 00:15:03.451 "uuid": "0e05e2e7-eb39-44bd-8b8e-68f8ef1e354c", 00:15:03.451 "assigned_rate_limits": { 00:15:03.451 "rw_ios_per_sec": 0, 00:15:03.451 "rw_mbytes_per_sec": 0, 00:15:03.451 "r_mbytes_per_sec": 0, 00:15:03.451 "w_mbytes_per_sec": 0 00:15:03.451 }, 00:15:03.451 "claimed": true, 00:15:03.451 "claim_type": "exclusive_write", 00:15:03.451 "zoned": false, 00:15:03.451 "supported_io_types": { 00:15:03.451 "read": true, 00:15:03.451 "write": true, 00:15:03.451 "unmap": true, 00:15:03.451 "write_zeroes": true, 00:15:03.451 "flush": true, 00:15:03.451 "reset": true, 00:15:03.451 "compare": false, 00:15:03.451 "compare_and_write": false, 00:15:03.451 "abort": true, 00:15:03.451 "nvme_admin": false, 00:15:03.451 "nvme_io": false 00:15:03.451 }, 00:15:03.451 "memory_domains": [ 00:15:03.451 { 00:15:03.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.451 "dma_device_type": 2 00:15:03.451 } 00:15:03.451 ], 00:15:03.451 "driver_specific": {} 00:15:03.451 } 00:15:03.451 ] 00:15:03.451 06:07:33 
-- common/autotest_common.sh@895 -- # return 0 00:15:03.451 06:07:33 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:03.451 06:07:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:03.451 06:07:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:03.451 06:07:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:03.451 06:07:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:03.451 06:07:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:03.451 06:07:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:03.451 06:07:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:03.451 06:07:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:03.451 06:07:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:03.451 06:07:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.451 06:07:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.451 06:07:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:03.451 "name": "Existed_Raid", 00:15:03.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.451 "strip_size_kb": 64, 00:15:03.451 "state": "configuring", 00:15:03.451 "raid_level": "raid0", 00:15:03.451 "superblock": false, 00:15:03.451 "num_base_bdevs": 2, 00:15:03.451 "num_base_bdevs_discovered": 1, 00:15:03.451 "num_base_bdevs_operational": 2, 00:15:03.451 "base_bdevs_list": [ 00:15:03.451 { 00:15:03.451 "name": "BaseBdev1", 00:15:03.451 "uuid": "0e05e2e7-eb39-44bd-8b8e-68f8ef1e354c", 00:15:03.451 "is_configured": true, 00:15:03.451 "data_offset": 0, 00:15:03.451 "data_size": 65536 00:15:03.451 }, 00:15:03.451 { 00:15:03.451 "name": "BaseBdev2", 00:15:03.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.451 "is_configured": false, 00:15:03.451 "data_offset": 0, 00:15:03.451 "data_size": 0 00:15:03.451 } 00:15:03.451 ] 00:15:03.451 }' 00:15:03.451 06:07:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:03.451 06:07:34 -- common/autotest_common.sh@10 -- # set +x 00:15:04.019 06:07:34 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:04.277 [2024-06-11 06:07:34.760090] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:04.277 [2024-06-11 06:07:34.760339] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:04.277 06:07:34 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:04.277 06:07:34 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:04.537 [2024-06-11 06:07:34.980177] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:04.537 [2024-06-11 06:07:34.982607] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:04.537 [2024-06-11 06:07:34.983231] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:04.537 06:07:34 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:04.537 06:07:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:04.537 06:07:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:04.537 06:07:34 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:04.537 06:07:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:04.537 06:07:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:04.537 06:07:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:04.537 06:07:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:04.537 06:07:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:04.537 06:07:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:04.537 06:07:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:04.537 06:07:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:04.537 06:07:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.537 06:07:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.537 06:07:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:04.537 "name": "Existed_Raid", 00:15:04.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.537 "strip_size_kb": 64, 00:15:04.537 "state": "configuring", 00:15:04.537 "raid_level": "raid0", 00:15:04.537 "superblock": false, 00:15:04.537 "num_base_bdevs": 2, 00:15:04.537 "num_base_bdevs_discovered": 1, 00:15:04.537 "num_base_bdevs_operational": 2, 00:15:04.537 "base_bdevs_list": [ 00:15:04.537 { 00:15:04.537 "name": "BaseBdev1", 00:15:04.537 "uuid": "0e05e2e7-eb39-44bd-8b8e-68f8ef1e354c", 00:15:04.537 "is_configured": true, 00:15:04.537 "data_offset": 0, 00:15:04.537 "data_size": 65536 00:15:04.537 }, 00:15:04.537 { 00:15:04.537 "name": "BaseBdev2", 00:15:04.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.537 "is_configured": false, 00:15:04.537 "data_offset": 0, 00:15:04.537 "data_size": 0 00:15:04.537 } 00:15:04.537 ] 00:15:04.537 }' 00:15:04.537 06:07:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:04.537 06:07:35 -- common/autotest_common.sh@10 -- # set +x 00:15:05.104 06:07:35 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:05.363 [2024-06-11 06:07:35.962642] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:05.363 [2024-06-11 06:07:35.962935] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:15:05.363 [2024-06-11 06:07:35.962978] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:05.363 [2024-06-11 06:07:35.963207] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:05.363 [2024-06-11 06:07:35.963701] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:15:05.363 [2024-06-11 06:07:35.963811] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:15:05.363 [2024-06-11 06:07:35.964173] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.363 BaseBdev2 00:15:05.363 06:07:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:05.363 06:07:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:05.363 06:07:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:05.363 06:07:35 -- common/autotest_common.sh@889 -- # local i 00:15:05.363 06:07:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:05.363 06:07:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:05.363 
06:07:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:05.622 06:07:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:05.881 [ 00:15:05.881 { 00:15:05.881 "name": "BaseBdev2", 00:15:05.881 "aliases": [ 00:15:05.881 "a7e7bed1-b667-4e52-842c-43f8d5804a11" 00:15:05.881 ], 00:15:05.881 "product_name": "Malloc disk", 00:15:05.881 "block_size": 512, 00:15:05.881 "num_blocks": 65536, 00:15:05.881 "uuid": "a7e7bed1-b667-4e52-842c-43f8d5804a11", 00:15:05.881 "assigned_rate_limits": { 00:15:05.881 "rw_ios_per_sec": 0, 00:15:05.881 "rw_mbytes_per_sec": 0, 00:15:05.881 "r_mbytes_per_sec": 0, 00:15:05.881 "w_mbytes_per_sec": 0 00:15:05.881 }, 00:15:05.881 "claimed": true, 00:15:05.881 "claim_type": "exclusive_write", 00:15:05.881 "zoned": false, 00:15:05.881 "supported_io_types": { 00:15:05.881 "read": true, 00:15:05.881 "write": true, 00:15:05.881 "unmap": true, 00:15:05.881 "write_zeroes": true, 00:15:05.881 "flush": true, 00:15:05.881 "reset": true, 00:15:05.881 "compare": false, 00:15:05.881 "compare_and_write": false, 00:15:05.881 "abort": true, 00:15:05.881 "nvme_admin": false, 00:15:05.881 "nvme_io": false 00:15:05.881 }, 00:15:05.881 "memory_domains": [ 00:15:05.881 { 00:15:05.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.881 "dma_device_type": 2 00:15:05.881 } 00:15:05.881 ], 00:15:05.881 "driver_specific": {} 00:15:05.881 } 00:15:05.881 ] 00:15:05.881 06:07:36 -- common/autotest_common.sh@895 -- # return 0 00:15:05.881 06:07:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:05.881 06:07:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:05.881 06:07:36 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:15:05.881 06:07:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:05.881 06:07:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:05.881 06:07:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:05.881 06:07:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:05.881 06:07:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:05.881 06:07:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:05.881 06:07:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:05.881 06:07:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:05.881 06:07:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:05.881 06:07:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.881 06:07:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.139 06:07:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:06.139 "name": "Existed_Raid", 00:15:06.139 "uuid": "7e4c4e71-4daf-4939-b2b4-38d3ef88c0b4", 00:15:06.140 "strip_size_kb": 64, 00:15:06.140 "state": "online", 00:15:06.140 "raid_level": "raid0", 00:15:06.140 "superblock": false, 00:15:06.140 "num_base_bdevs": 2, 00:15:06.140 "num_base_bdevs_discovered": 2, 00:15:06.140 "num_base_bdevs_operational": 2, 00:15:06.140 "base_bdevs_list": [ 00:15:06.140 { 00:15:06.140 "name": "BaseBdev1", 00:15:06.140 "uuid": "0e05e2e7-eb39-44bd-8b8e-68f8ef1e354c", 00:15:06.140 "is_configured": true, 00:15:06.140 "data_offset": 0, 00:15:06.140 "data_size": 65536 00:15:06.140 }, 00:15:06.140 { 00:15:06.140 "name": "BaseBdev2", 
00:15:06.140 "uuid": "a7e7bed1-b667-4e52-842c-43f8d5804a11", 00:15:06.140 "is_configured": true, 00:15:06.140 "data_offset": 0, 00:15:06.140 "data_size": 65536 00:15:06.140 } 00:15:06.140 ] 00:15:06.140 }' 00:15:06.140 06:07:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:06.140 06:07:36 -- common/autotest_common.sh@10 -- # set +x 00:15:06.708 06:07:37 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:06.708 [2024-06-11 06:07:37.286960] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:06.708 [2024-06-11 06:07:37.287185] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:06.708 [2024-06-11 06:07:37.287404] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:06.968 06:07:37 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:06.968 06:07:37 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:06.968 06:07:37 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:06.968 06:07:37 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:06.968 06:07:37 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:06.968 06:07:37 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:15:06.968 06:07:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:06.968 06:07:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:06.968 06:07:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:06.968 06:07:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:06.968 06:07:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:06.968 06:07:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:06.968 06:07:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:06.968 06:07:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:06.968 06:07:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:06.968 06:07:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.968 06:07:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.968 06:07:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:06.968 "name": "Existed_Raid", 00:15:06.968 "uuid": "7e4c4e71-4daf-4939-b2b4-38d3ef88c0b4", 00:15:06.968 "strip_size_kb": 64, 00:15:06.968 "state": "offline", 00:15:06.968 "raid_level": "raid0", 00:15:06.968 "superblock": false, 00:15:06.968 "num_base_bdevs": 2, 00:15:06.968 "num_base_bdevs_discovered": 1, 00:15:06.968 "num_base_bdevs_operational": 1, 00:15:06.968 "base_bdevs_list": [ 00:15:06.968 { 00:15:06.968 "name": null, 00:15:06.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.968 "is_configured": false, 00:15:06.968 "data_offset": 0, 00:15:06.968 "data_size": 65536 00:15:06.968 }, 00:15:06.968 { 00:15:06.968 "name": "BaseBdev2", 00:15:06.968 "uuid": "a7e7bed1-b667-4e52-842c-43f8d5804a11", 00:15:06.968 "is_configured": true, 00:15:06.968 "data_offset": 0, 00:15:06.968 "data_size": 65536 00:15:06.968 } 00:15:06.968 ] 00:15:06.968 }' 00:15:06.968 06:07:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:06.968 06:07:37 -- common/autotest_common.sh@10 -- # set +x 00:15:07.536 06:07:38 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:07.536 06:07:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:07.536 06:07:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.536 06:07:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:07.795 06:07:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:07.795 06:07:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:07.795 06:07:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:07.795 [2024-06-11 06:07:38.389340] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:07.795 [2024-06-11 06:07:38.389545] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:15:08.054 06:07:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:08.054 06:07:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:08.054 06:07:38 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.055 06:07:38 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:08.055 06:07:38 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:08.055 06:07:38 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:08.055 06:07:38 -- bdev/bdev_raid.sh@287 -- # killprocess 112782 00:15:08.055 06:07:38 -- common/autotest_common.sh@926 -- # '[' -z 112782 ']' 00:15:08.055 06:07:38 -- common/autotest_common.sh@930 -- # kill -0 112782 00:15:08.055 06:07:38 -- common/autotest_common.sh@931 -- # uname 00:15:08.055 06:07:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:08.055 06:07:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 112782 00:15:08.314 06:07:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:08.314 06:07:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:08.314 06:07:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 112782' 00:15:08.314 killing process with pid 112782 00:15:08.314 06:07:38 -- common/autotest_common.sh@945 -- # kill 112782 00:15:08.314 [2024-06-11 06:07:38.717026] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:08.314 06:07:38 -- common/autotest_common.sh@950 -- # wait 112782 00:15:08.314 [2024-06-11 06:07:38.717261] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:09.694 ************************************ 00:15:09.694 END TEST raid_state_function_test 00:15:09.694 ************************************ 00:15:09.694 06:07:40 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:09.694 00:15:09.694 real 0m9.263s 00:15:09.694 user 0m15.050s 00:15:09.694 sys 0m1.521s 00:15:09.694 06:07:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:09.694 06:07:40 -- common/autotest_common.sh@10 -- # set +x 00:15:09.694 06:07:40 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:15:09.694 06:07:40 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:09.694 06:07:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:09.694 06:07:40 -- common/autotest_common.sh@10 -- # set +x 00:15:09.694 ************************************ 00:15:09.694 START TEST raid_state_function_test_sb 00:15:09.694 ************************************ 00:15:09.694 06:07:40 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 true 00:15:09.694 06:07:40 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:09.694 06:07:40 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:09.694 06:07:40 -- 
bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:09.694 06:07:40 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:09.694 06:07:40 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:09.694 06:07:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:09.694 06:07:40 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:09.694 06:07:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:09.694 06:07:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:09.694 06:07:40 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:09.694 06:07:40 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:09.694 06:07:40 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:09.694 06:07:40 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:09.694 06:07:40 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:09.694 06:07:40 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:09.694 06:07:40 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:09.694 06:07:40 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:09.694 06:07:40 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:09.694 06:07:40 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:09.694 06:07:40 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:09.694 06:07:40 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:09.694 06:07:40 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:09.694 06:07:40 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:09.694 06:07:40 -- bdev/bdev_raid.sh@226 -- # raid_pid=113096 00:15:09.694 06:07:40 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 113096' 00:15:09.694 06:07:40 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:09.694 Process raid pid: 113096 00:15:09.694 06:07:40 -- bdev/bdev_raid.sh@228 -- # waitforlisten 113096 /var/tmp/spdk-raid.sock 00:15:09.694 06:07:40 -- common/autotest_common.sh@819 -- # '[' -z 113096 ']' 00:15:09.694 06:07:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:09.694 06:07:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:09.694 06:07:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:09.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:09.694 06:07:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:09.694 06:07:40 -- common/autotest_common.sh@10 -- # set +x 00:15:09.694 [2024-06-11 06:07:40.255237] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:15:09.694 [2024-06-11 06:07:40.255705] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.954 [2024-06-11 06:07:40.437621] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.213 [2024-06-11 06:07:40.672478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.472 [2024-06-11 06:07:40.916882] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.732 06:07:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:10.732 06:07:41 -- common/autotest_common.sh@852 -- # return 0 00:15:10.732 06:07:41 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:10.732 [2024-06-11 06:07:41.327688] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:10.732 [2024-06-11 06:07:41.327931] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:10.732 [2024-06-11 06:07:41.328057] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:10.732 [2024-06-11 06:07:41.328115] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:10.732 06:07:41 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:10.732 06:07:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:10.732 06:07:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:10.732 06:07:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:10.732 06:07:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:10.732 06:07:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:10.732 06:07:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:10.732 06:07:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:10.732 06:07:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:10.732 06:07:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:10.732 06:07:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.732 06:07:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.991 06:07:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:10.991 "name": "Existed_Raid", 00:15:10.991 "uuid": "ee36e4b3-86ba-46fe-98c5-61dd46b1e038", 00:15:10.991 "strip_size_kb": 64, 00:15:10.991 "state": "configuring", 00:15:10.991 "raid_level": "raid0", 00:15:10.991 "superblock": true, 00:15:10.991 "num_base_bdevs": 2, 00:15:10.991 "num_base_bdevs_discovered": 0, 00:15:10.991 "num_base_bdevs_operational": 2, 00:15:10.991 "base_bdevs_list": [ 00:15:10.991 { 00:15:10.991 "name": "BaseBdev1", 00:15:10.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.991 "is_configured": false, 00:15:10.991 "data_offset": 0, 00:15:10.991 "data_size": 0 00:15:10.991 }, 00:15:10.991 { 00:15:10.991 "name": "BaseBdev2", 00:15:10.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.991 "is_configured": false, 00:15:10.991 "data_offset": 0, 00:15:10.991 "data_size": 0 00:15:10.991 } 00:15:10.991 ] 00:15:10.991 }' 00:15:10.991 06:07:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:10.991 06:07:41 -- 
common/autotest_common.sh@10 -- # set +x 00:15:11.562 06:07:42 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:11.907 [2024-06-11 06:07:42.255707] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:11.907 [2024-06-11 06:07:42.255897] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:15:11.907 06:07:42 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:11.907 [2024-06-11 06:07:42.455829] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:11.907 [2024-06-11 06:07:42.456062] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:11.907 [2024-06-11 06:07:42.456148] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:11.907 [2024-06-11 06:07:42.456206] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:11.907 06:07:42 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:12.166 [2024-06-11 06:07:42.666529] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:12.166 BaseBdev1 00:15:12.166 06:07:42 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:12.166 06:07:42 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:12.166 06:07:42 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:12.166 06:07:42 -- common/autotest_common.sh@889 -- # local i 00:15:12.166 06:07:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:12.166 06:07:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:12.166 06:07:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:12.425 06:07:42 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:12.704 [ 00:15:12.704 { 00:15:12.704 "name": "BaseBdev1", 00:15:12.704 "aliases": [ 00:15:12.704 "42a0c94b-84ff-43b4-8e57-cd86f9cb264f" 00:15:12.704 ], 00:15:12.704 "product_name": "Malloc disk", 00:15:12.704 "block_size": 512, 00:15:12.704 "num_blocks": 65536, 00:15:12.704 "uuid": "42a0c94b-84ff-43b4-8e57-cd86f9cb264f", 00:15:12.704 "assigned_rate_limits": { 00:15:12.704 "rw_ios_per_sec": 0, 00:15:12.704 "rw_mbytes_per_sec": 0, 00:15:12.704 "r_mbytes_per_sec": 0, 00:15:12.704 "w_mbytes_per_sec": 0 00:15:12.704 }, 00:15:12.704 "claimed": true, 00:15:12.704 "claim_type": "exclusive_write", 00:15:12.704 "zoned": false, 00:15:12.704 "supported_io_types": { 00:15:12.704 "read": true, 00:15:12.704 "write": true, 00:15:12.704 "unmap": true, 00:15:12.704 "write_zeroes": true, 00:15:12.704 "flush": true, 00:15:12.704 "reset": true, 00:15:12.704 "compare": false, 00:15:12.704 "compare_and_write": false, 00:15:12.704 "abort": true, 00:15:12.704 "nvme_admin": false, 00:15:12.704 "nvme_io": false 00:15:12.704 }, 00:15:12.704 "memory_domains": [ 00:15:12.704 { 00:15:12.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.704 "dma_device_type": 2 00:15:12.704 } 00:15:12.704 ], 00:15:12.704 "driver_specific": {} 00:15:12.704 } 00:15:12.704 ] 00:15:12.704 
06:07:43 -- common/autotest_common.sh@895 -- # return 0 00:15:12.704 06:07:43 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:12.704 06:07:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:12.704 06:07:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:12.704 06:07:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:12.704 06:07:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:12.704 06:07:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:12.704 06:07:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:12.704 06:07:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:12.704 06:07:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:12.704 06:07:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:12.704 06:07:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.704 06:07:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.704 06:07:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:12.704 "name": "Existed_Raid", 00:15:12.704 "uuid": "8a3ac9ba-7062-473a-8dfb-57890549f579", 00:15:12.704 "strip_size_kb": 64, 00:15:12.704 "state": "configuring", 00:15:12.704 "raid_level": "raid0", 00:15:12.704 "superblock": true, 00:15:12.704 "num_base_bdevs": 2, 00:15:12.704 "num_base_bdevs_discovered": 1, 00:15:12.704 "num_base_bdevs_operational": 2, 00:15:12.704 "base_bdevs_list": [ 00:15:12.704 { 00:15:12.704 "name": "BaseBdev1", 00:15:12.704 "uuid": "42a0c94b-84ff-43b4-8e57-cd86f9cb264f", 00:15:12.704 "is_configured": true, 00:15:12.704 "data_offset": 2048, 00:15:12.704 "data_size": 63488 00:15:12.704 }, 00:15:12.704 { 00:15:12.704 "name": "BaseBdev2", 00:15:12.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.704 "is_configured": false, 00:15:12.704 "data_offset": 0, 00:15:12.704 "data_size": 0 00:15:12.704 } 00:15:12.704 ] 00:15:12.704 }' 00:15:12.704 06:07:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:12.704 06:07:43 -- common/autotest_common.sh@10 -- # set +x 00:15:13.273 06:07:43 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:13.532 [2024-06-11 06:07:44.114816] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:13.532 [2024-06-11 06:07:44.115070] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:13.532 06:07:44 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:13.532 06:07:44 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:13.792 06:07:44 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:14.050 BaseBdev1 00:15:14.309 06:07:44 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:14.309 06:07:44 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:14.309 06:07:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:14.309 06:07:44 -- common/autotest_common.sh@889 -- # local i 00:15:14.309 06:07:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:14.309 06:07:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:14.309 06:07:44 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:14.309 06:07:44 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:14.569 [ 00:15:14.569 { 00:15:14.569 "name": "BaseBdev1", 00:15:14.569 "aliases": [ 00:15:14.569 "cf569568-786a-41d5-ac37-dd34d5e7acd3" 00:15:14.569 ], 00:15:14.569 "product_name": "Malloc disk", 00:15:14.569 "block_size": 512, 00:15:14.569 "num_blocks": 65536, 00:15:14.569 "uuid": "cf569568-786a-41d5-ac37-dd34d5e7acd3", 00:15:14.569 "assigned_rate_limits": { 00:15:14.569 "rw_ios_per_sec": 0, 00:15:14.569 "rw_mbytes_per_sec": 0, 00:15:14.569 "r_mbytes_per_sec": 0, 00:15:14.569 "w_mbytes_per_sec": 0 00:15:14.569 }, 00:15:14.569 "claimed": false, 00:15:14.569 "zoned": false, 00:15:14.569 "supported_io_types": { 00:15:14.569 "read": true, 00:15:14.569 "write": true, 00:15:14.569 "unmap": true, 00:15:14.569 "write_zeroes": true, 00:15:14.569 "flush": true, 00:15:14.569 "reset": true, 00:15:14.569 "compare": false, 00:15:14.569 "compare_and_write": false, 00:15:14.569 "abort": true, 00:15:14.569 "nvme_admin": false, 00:15:14.569 "nvme_io": false 00:15:14.569 }, 00:15:14.569 "memory_domains": [ 00:15:14.569 { 00:15:14.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.569 "dma_device_type": 2 00:15:14.569 } 00:15:14.569 ], 00:15:14.569 "driver_specific": {} 00:15:14.569 } 00:15:14.569 ] 00:15:14.569 06:07:45 -- common/autotest_common.sh@895 -- # return 0 00:15:14.569 06:07:45 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:14.828 [2024-06-11 06:07:45.276596] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:14.828 [2024-06-11 06:07:45.278916] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:14.828 [2024-06-11 06:07:45.279111] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:14.828 06:07:45 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:14.828 06:07:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:14.828 06:07:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:14.828 06:07:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:14.828 06:07:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:14.828 06:07:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:14.828 06:07:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:14.828 06:07:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:14.828 06:07:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:14.828 06:07:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:14.828 06:07:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:14.828 06:07:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:14.828 06:07:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.828 06:07:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.828 06:07:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:14.828 "name": "Existed_Raid", 00:15:14.828 "uuid": "dbeb617e-9e11-4e9b-91db-9b8c8eb512a5", 00:15:14.828 "strip_size_kb": 64, 00:15:14.828 "state": 
"configuring", 00:15:14.828 "raid_level": "raid0", 00:15:14.828 "superblock": true, 00:15:14.828 "num_base_bdevs": 2, 00:15:14.828 "num_base_bdevs_discovered": 1, 00:15:14.828 "num_base_bdevs_operational": 2, 00:15:14.828 "base_bdevs_list": [ 00:15:14.828 { 00:15:14.828 "name": "BaseBdev1", 00:15:14.828 "uuid": "cf569568-786a-41d5-ac37-dd34d5e7acd3", 00:15:14.828 "is_configured": true, 00:15:14.828 "data_offset": 2048, 00:15:14.828 "data_size": 63488 00:15:14.828 }, 00:15:14.828 { 00:15:14.828 "name": "BaseBdev2", 00:15:14.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.828 "is_configured": false, 00:15:14.828 "data_offset": 0, 00:15:14.828 "data_size": 0 00:15:14.828 } 00:15:14.828 ] 00:15:14.828 }' 00:15:14.828 06:07:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:14.828 06:07:45 -- common/autotest_common.sh@10 -- # set +x 00:15:15.766 06:07:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:15.766 [2024-06-11 06:07:46.371909] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:15.766 [2024-06-11 06:07:46.372449] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:15:15.766 [2024-06-11 06:07:46.372562] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:15.766 [2024-06-11 06:07:46.372752] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:15.766 [2024-06-11 06:07:46.373155] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:15:15.766 [2024-06-11 06:07:46.373265] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:15:15.766 BaseBdev2 00:15:15.766 [2024-06-11 06:07:46.373511] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.766 06:07:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:15.766 06:07:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:15.766 06:07:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:15.766 06:07:46 -- common/autotest_common.sh@889 -- # local i 00:15:15.766 06:07:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:15.766 06:07:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:15.766 06:07:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:16.025 06:07:46 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:16.284 [ 00:15:16.284 { 00:15:16.284 "name": "BaseBdev2", 00:15:16.284 "aliases": [ 00:15:16.284 "4929fecc-5481-4174-b2ad-4f8b740cdc13" 00:15:16.284 ], 00:15:16.284 "product_name": "Malloc disk", 00:15:16.284 "block_size": 512, 00:15:16.284 "num_blocks": 65536, 00:15:16.284 "uuid": "4929fecc-5481-4174-b2ad-4f8b740cdc13", 00:15:16.284 "assigned_rate_limits": { 00:15:16.284 "rw_ios_per_sec": 0, 00:15:16.284 "rw_mbytes_per_sec": 0, 00:15:16.284 "r_mbytes_per_sec": 0, 00:15:16.284 "w_mbytes_per_sec": 0 00:15:16.284 }, 00:15:16.284 "claimed": true, 00:15:16.284 "claim_type": "exclusive_write", 00:15:16.284 "zoned": false, 00:15:16.284 "supported_io_types": { 00:15:16.284 "read": true, 00:15:16.284 "write": true, 00:15:16.284 "unmap": true, 00:15:16.284 "write_zeroes": true, 00:15:16.284 "flush": true, 00:15:16.284 
"reset": true, 00:15:16.284 "compare": false, 00:15:16.284 "compare_and_write": false, 00:15:16.284 "abort": true, 00:15:16.284 "nvme_admin": false, 00:15:16.284 "nvme_io": false 00:15:16.284 }, 00:15:16.284 "memory_domains": [ 00:15:16.284 { 00:15:16.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.284 "dma_device_type": 2 00:15:16.284 } 00:15:16.284 ], 00:15:16.284 "driver_specific": {} 00:15:16.284 } 00:15:16.284 ] 00:15:16.284 06:07:46 -- common/autotest_common.sh@895 -- # return 0 00:15:16.284 06:07:46 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:16.284 06:07:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:16.284 06:07:46 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:15:16.284 06:07:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:16.284 06:07:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:16.284 06:07:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:16.284 06:07:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:16.284 06:07:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:16.284 06:07:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:16.284 06:07:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:16.284 06:07:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:16.284 06:07:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:16.284 06:07:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.284 06:07:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.543 06:07:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:16.543 "name": "Existed_Raid", 00:15:16.543 "uuid": "dbeb617e-9e11-4e9b-91db-9b8c8eb512a5", 00:15:16.543 "strip_size_kb": 64, 00:15:16.543 "state": "online", 00:15:16.543 "raid_level": "raid0", 00:15:16.543 "superblock": true, 00:15:16.543 "num_base_bdevs": 2, 00:15:16.543 "num_base_bdevs_discovered": 2, 00:15:16.543 "num_base_bdevs_operational": 2, 00:15:16.543 "base_bdevs_list": [ 00:15:16.543 { 00:15:16.543 "name": "BaseBdev1", 00:15:16.543 "uuid": "cf569568-786a-41d5-ac37-dd34d5e7acd3", 00:15:16.543 "is_configured": true, 00:15:16.543 "data_offset": 2048, 00:15:16.543 "data_size": 63488 00:15:16.543 }, 00:15:16.543 { 00:15:16.543 "name": "BaseBdev2", 00:15:16.543 "uuid": "4929fecc-5481-4174-b2ad-4f8b740cdc13", 00:15:16.543 "is_configured": true, 00:15:16.543 "data_offset": 2048, 00:15:16.543 "data_size": 63488 00:15:16.543 } 00:15:16.543 ] 00:15:16.543 }' 00:15:16.543 06:07:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:16.543 06:07:47 -- common/autotest_common.sh@10 -- # set +x 00:15:17.111 06:07:47 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:17.370 [2024-06-11 06:07:47.800238] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:17.370 [2024-06-11 06:07:47.800441] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:17.370 [2024-06-11 06:07:47.800653] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:17.370 06:07:47 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:17.370 06:07:47 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:17.370 06:07:47 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:17.370 06:07:47 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:17.370 
06:07:47 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:17.370 06:07:47 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:15:17.370 06:07:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:17.371 06:07:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:17.371 06:07:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:17.371 06:07:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:17.371 06:07:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:17.371 06:07:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:17.371 06:07:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:17.371 06:07:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:17.371 06:07:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:17.371 06:07:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:17.371 06:07:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.629 06:07:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:17.629 "name": "Existed_Raid", 00:15:17.629 "uuid": "dbeb617e-9e11-4e9b-91db-9b8c8eb512a5", 00:15:17.629 "strip_size_kb": 64, 00:15:17.629 "state": "offline", 00:15:17.629 "raid_level": "raid0", 00:15:17.629 "superblock": true, 00:15:17.629 "num_base_bdevs": 2, 00:15:17.629 "num_base_bdevs_discovered": 1, 00:15:17.629 "num_base_bdevs_operational": 1, 00:15:17.629 "base_bdevs_list": [ 00:15:17.629 { 00:15:17.629 "name": null, 00:15:17.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.629 "is_configured": false, 00:15:17.629 "data_offset": 2048, 00:15:17.629 "data_size": 63488 00:15:17.629 }, 00:15:17.629 { 00:15:17.629 "name": "BaseBdev2", 00:15:17.629 "uuid": "4929fecc-5481-4174-b2ad-4f8b740cdc13", 00:15:17.629 "is_configured": true, 00:15:17.629 "data_offset": 2048, 00:15:17.629 "data_size": 63488 00:15:17.629 } 00:15:17.629 ] 00:15:17.629 }' 00:15:17.629 06:07:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:17.629 06:07:48 -- common/autotest_common.sh@10 -- # set +x 00:15:18.197 06:07:48 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:18.198 06:07:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:18.198 06:07:48 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.198 06:07:48 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:18.457 06:07:48 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:18.457 06:07:48 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:18.457 06:07:48 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:18.716 [2024-06-11 06:07:49.204187] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:18.716 [2024-06-11 06:07:49.204417] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:15:18.716 06:07:49 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:18.716 06:07:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:18.716 06:07:49 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.716 06:07:49 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:18.976 06:07:49 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:15:18.976 06:07:49 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:18.976 06:07:49 -- bdev/bdev_raid.sh@287 -- # killprocess 113096 00:15:18.976 06:07:49 -- common/autotest_common.sh@926 -- # '[' -z 113096 ']' 00:15:18.976 06:07:49 -- common/autotest_common.sh@930 -- # kill -0 113096 00:15:18.976 06:07:49 -- common/autotest_common.sh@931 -- # uname 00:15:18.976 06:07:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:18.976 06:07:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 113096 00:15:18.976 06:07:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:18.976 06:07:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:18.976 06:07:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 113096' 00:15:18.976 killing process with pid 113096 00:15:18.976 06:07:49 -- common/autotest_common.sh@945 -- # kill 113096 00:15:18.976 [2024-06-11 06:07:49.598243] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:18.976 06:07:49 -- common/autotest_common.sh@950 -- # wait 113096 00:15:18.976 [2024-06-11 06:07:49.598496] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:20.354 ************************************ 00:15:20.354 06:07:50 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:20.354 00:15:20.354 real 0m10.797s 00:15:20.354 user 0m17.701s 00:15:20.354 sys 0m1.804s 00:15:20.354 06:07:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:20.354 06:07:50 -- common/autotest_common.sh@10 -- # set +x 00:15:20.354 END TEST raid_state_function_test_sb 00:15:20.354 ************************************ 00:15:20.613 06:07:51 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:15:20.613 06:07:51 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:20.613 06:07:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:20.613 06:07:51 -- common/autotest_common.sh@10 -- # set +x 00:15:20.613 ************************************ 00:15:20.613 START TEST raid_superblock_test 00:15:20.613 ************************************ 00:15:20.613 06:07:51 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 2 00:15:20.613 06:07:51 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:15:20.613 06:07:51 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:20.613 06:07:51 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:20.613 06:07:51 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:20.613 06:07:51 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:20.613 06:07:51 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:20.613 06:07:51 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:20.613 06:07:51 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:20.613 06:07:51 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:20.613 06:07:51 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:20.613 06:07:51 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:20.613 06:07:51 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:20.613 06:07:51 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:20.613 06:07:51 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:15:20.613 06:07:51 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:20.613 06:07:51 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:20.613 06:07:51 -- bdev/bdev_raid.sh@357 -- # raid_pid=113427 00:15:20.613 06:07:51 -- bdev/bdev_raid.sh@358 -- # waitforlisten 113427 
/var/tmp/spdk-raid.sock 00:15:20.613 06:07:51 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:20.613 06:07:51 -- common/autotest_common.sh@819 -- # '[' -z 113427 ']' 00:15:20.613 06:07:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:20.613 06:07:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:20.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:20.613 06:07:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:20.613 06:07:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:20.613 06:07:51 -- common/autotest_common.sh@10 -- # set +x 00:15:20.613 [2024-06-11 06:07:51.120744] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:20.613 [2024-06-11 06:07:51.121269] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113427 ] 00:15:20.872 [2024-06-11 06:07:51.304508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.131 [2024-06-11 06:07:51.546851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.389 [2024-06-11 06:07:51.792295] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:21.389 06:07:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:21.389 06:07:52 -- common/autotest_common.sh@852 -- # return 0 00:15:21.389 06:07:52 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:21.389 06:07:52 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:21.389 06:07:52 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:21.389 06:07:52 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:21.389 06:07:52 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:21.389 06:07:52 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:21.389 06:07:52 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:21.389 06:07:52 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:21.389 06:07:52 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:21.647 malloc1 00:15:21.906 06:07:52 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:21.906 [2024-06-11 06:07:52.459045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:21.906 [2024-06-11 06:07:52.459345] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.906 [2024-06-11 06:07:52.459419] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:15:21.906 [2024-06-11 06:07:52.459612] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.906 [2024-06-11 06:07:52.462385] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.906 [2024-06-11 06:07:52.462611] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:21.906 pt1 00:15:21.906 06:07:52 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
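The xtrace above captures the per-base-bdev setup loop from bdev_raid.sh: each pass creates a 32 MiB malloc bdev with 512-byte blocks and wraps it in a passthru bdev carrying a fixed per-index UUID. A minimal reconstruction of that loop, assuming the rpc.py path and socket used in this run (the real script may differ in detail):

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for ((i = 1; i <= num_base_bdevs; i++)); do
        bdev_malloc="malloc$i"
        bdev_pt="pt$i"
        bdev_pt_uuid="00000000-0000-0000-0000-00000000000$i"
        # 32 MiB malloc bdev, 512-byte blocks, then a passthru wrapper on top
        $rpc_py bdev_malloc_create 32 512 -b "$bdev_malloc"
        $rpc_py bdev_passthru_create -b "$bdev_malloc" -p "$bdev_pt" -u "$bdev_pt_uuid"
    done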
00:15:21.906 06:07:52 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:21.906 06:07:52 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:21.906 06:07:52 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:21.906 06:07:52 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:21.906 06:07:52 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:21.906 06:07:52 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:21.906 06:07:52 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:21.906 06:07:52 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:22.165 malloc2 00:15:22.165 06:07:52 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:22.424 [2024-06-11 06:07:53.017625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:22.424 [2024-06-11 06:07:53.017901] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.424 [2024-06-11 06:07:53.017998] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:22.424 [2024-06-11 06:07:53.018156] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.424 [2024-06-11 06:07:53.020728] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.424 [2024-06-11 06:07:53.020921] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:22.424 pt2 00:15:22.424 06:07:53 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:22.424 06:07:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:22.424 06:07:53 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:22.683 [2024-06-11 06:07:53.189866] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:22.683 [2024-06-11 06:07:53.192293] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:22.683 [2024-06-11 06:07:53.192607] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 00:15:22.683 [2024-06-11 06:07:53.192715] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:22.683 [2024-06-11 06:07:53.192913] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:15:22.683 [2024-06-11 06:07:53.193422] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:15:22.683 [2024-06-11 06:07:53.193527] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80 00:15:22.683 [2024-06-11 06:07:53.193802] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.683 06:07:53 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:22.683 06:07:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:22.683 06:07:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:22.683 06:07:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:22.683 06:07:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:22.683 06:07:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
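The locals declared above belong to verify_raid_bdev_state, whose jq query follows below: the helper pulls the full bdev_raid_get_bdevs listing, selects the array by name, then asserts the reported fields against the expected values. A rough sketch reconstructed from the visible trace; the field-by-field checks are an assumption about bdev_raid.sh's internals:

    verify_raid_bdev_state() {
        local raid_bdev_name=$1 expected_state=$2 raid_level=$3
        local strip_size=$4 num_base_bdevs_operational=$5
        local rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
        local raid_bdev_info
        raid_bdev_info=$($rpc_py bdev_raid_get_bdevs all |
            jq -r ".[] | select(.name == \"$raid_bdev_name\")")
        # each assertion must hold; a failed [[ ]] aborts the test under set -e
        [[ $(jq -r '.state' <<< "$raid_bdev_info") == "$expected_state" ]]
        [[ $(jq -r '.raid_level' <<< "$raid_bdev_info") == "$raid_level" ]]
        [[ $(jq -r '.strip_size_kb' <<< "$raid_bdev_info") == "$strip_size" ]]
        [[ $(jq -r '.num_base_bdevs_operational' <<< "$raid_bdev_info") == "$num_base_bdevs_operational" ]]
    }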
00:15:22.683 06:07:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:22.683 06:07:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:22.683 06:07:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:22.683 06:07:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:22.683 06:07:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.683 06:07:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.942 06:07:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:22.942 "name": "raid_bdev1", 00:15:22.942 "uuid": "16aa0ba4-2ee9-4ad5-9458-1bd13fbab3fa", 00:15:22.942 "strip_size_kb": 64, 00:15:22.942 "state": "online", 00:15:22.942 "raid_level": "raid0", 00:15:22.942 "superblock": true, 00:15:22.942 "num_base_bdevs": 2, 00:15:22.942 "num_base_bdevs_discovered": 2, 00:15:22.942 "num_base_bdevs_operational": 2, 00:15:22.942 "base_bdevs_list": [ 00:15:22.942 { 00:15:22.942 "name": "pt1", 00:15:22.942 "uuid": "ef777e8f-69e8-5ed6-97cd-ea9c99f66646", 00:15:22.942 "is_configured": true, 00:15:22.942 "data_offset": 2048, 00:15:22.942 "data_size": 63488 00:15:22.942 }, 00:15:22.942 { 00:15:22.942 "name": "pt2", 00:15:22.942 "uuid": "d58cc98e-a6a9-5f64-9bfc-97b508e4fe24", 00:15:22.942 "is_configured": true, 00:15:22.942 "data_offset": 2048, 00:15:22.942 "data_size": 63488 00:15:22.942 } 00:15:22.942 ] 00:15:22.942 }' 00:15:22.942 06:07:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:22.942 06:07:53 -- common/autotest_common.sh@10 -- # set +x 00:15:23.510 06:07:53 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:23.510 06:07:53 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:23.510 [2024-06-11 06:07:54.142232] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:23.768 06:07:54 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=16aa0ba4-2ee9-4ad5-9458-1bd13fbab3fa 00:15:23.768 06:07:54 -- bdev/bdev_raid.sh@380 -- # '[' -z 16aa0ba4-2ee9-4ad5-9458-1bd13fbab3fa ']' 00:15:23.768 06:07:54 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:23.768 [2024-06-11 06:07:54.318047] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:23.768 [2024-06-11 06:07:54.318228] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:23.768 [2024-06-11 06:07:54.318473] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:23.768 [2024-06-11 06:07:54.318600] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:23.768 [2024-06-11 06:07:54.318681] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:15:23.768 06:07:54 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.768 06:07:54 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:24.027 06:07:54 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:24.027 06:07:54 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:24.027 06:07:54 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:24.027 06:07:54 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:15:24.286 06:07:54 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:24.286 06:07:54 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:24.286 06:07:54 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:24.286 06:07:54 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:24.545 06:07:55 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:24.545 06:07:55 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:24.545 06:07:55 -- common/autotest_common.sh@640 -- # local es=0 00:15:24.545 06:07:55 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:24.545 06:07:55 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:24.545 06:07:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:24.545 06:07:55 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:24.545 06:07:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:24.545 06:07:55 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:24.545 06:07:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:24.545 06:07:55 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:24.545 06:07:55 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:24.545 06:07:55 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:24.804 [2024-06-11 06:07:55.374254] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:24.804 [2024-06-11 06:07:55.376692] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:24.804 [2024-06-11 06:07:55.376908] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:24.804 [2024-06-11 06:07:55.377072] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:24.804 [2024-06-11 06:07:55.377184] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:24.804 [2024-06-11 06:07:55.377279] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring 00:15:24.804 request: 00:15:24.804 { 00:15:24.804 "name": "raid_bdev1", 00:15:24.804 "raid_level": "raid0", 00:15:24.804 "base_bdevs": [ 00:15:24.804 "malloc1", 00:15:24.804 "malloc2" 00:15:24.804 ], 00:15:24.804 "superblock": false, 00:15:24.804 "strip_size_kb": 64, 00:15:24.804 "method": "bdev_raid_create", 00:15:24.804 "req_id": 1 00:15:24.804 } 00:15:24.804 Got JSON-RPC error response 00:15:24.804 response: 00:15:24.804 { 00:15:24.804 "code": -17, 00:15:24.804 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:24.804 } 00:15:24.804 06:07:55 -- common/autotest_common.sh@643 -- # es=1 00:15:24.804 06:07:55 -- common/autotest_common.sh@651 
-- # (( es > 128 )) 00:15:24.804 06:07:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:24.804 06:07:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:24.804 06:07:55 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.804 06:07:55 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:25.063 06:07:55 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:25.063 06:07:55 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:25.063 06:07:55 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:25.322 [2024-06-11 06:07:55.718262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:25.322 [2024-06-11 06:07:55.718530] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.322 [2024-06-11 06:07:55.718604] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:25.322 [2024-06-11 06:07:55.718689] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.322 [2024-06-11 06:07:55.721324] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.322 [2024-06-11 06:07:55.721485] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:25.322 [2024-06-11 06:07:55.721704] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:25.322 [2024-06-11 06:07:55.721832] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:25.322 pt1 00:15:25.322 06:07:55 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:15:25.322 06:07:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:25.322 06:07:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:25.322 06:07:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:25.322 06:07:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:25.322 06:07:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:25.322 06:07:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:25.322 06:07:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:25.322 06:07:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:25.322 06:07:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:25.322 06:07:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.322 06:07:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.322 06:07:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:25.322 "name": "raid_bdev1", 00:15:25.322 "uuid": "16aa0ba4-2ee9-4ad5-9458-1bd13fbab3fa", 00:15:25.322 "strip_size_kb": 64, 00:15:25.322 "state": "configuring", 00:15:25.322 "raid_level": "raid0", 00:15:25.322 "superblock": true, 00:15:25.322 "num_base_bdevs": 2, 00:15:25.322 "num_base_bdevs_discovered": 1, 00:15:25.322 "num_base_bdevs_operational": 2, 00:15:25.322 "base_bdevs_list": [ 00:15:25.322 { 00:15:25.322 "name": "pt1", 00:15:25.322 "uuid": "ef777e8f-69e8-5ed6-97cd-ea9c99f66646", 00:15:25.322 "is_configured": true, 00:15:25.322 "data_offset": 2048, 00:15:25.322 "data_size": 63488 00:15:25.322 }, 00:15:25.322 { 00:15:25.322 "name": null, 00:15:25.322 "uuid": "d58cc98e-a6a9-5f64-9bfc-97b508e4fe24", 00:15:25.322 
"is_configured": false, 00:15:25.322 "data_offset": 2048, 00:15:25.322 "data_size": 63488 00:15:25.322 } 00:15:25.322 ] 00:15:25.322 }' 00:15:25.322 06:07:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:25.322 06:07:55 -- common/autotest_common.sh@10 -- # set +x 00:15:25.890 06:07:56 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:15:25.890 06:07:56 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:25.890 06:07:56 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:25.890 06:07:56 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:26.149 [2024-06-11 06:07:56.670455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:26.149 [2024-06-11 06:07:56.670736] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.149 [2024-06-11 06:07:56.670811] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:26.149 [2024-06-11 06:07:56.670941] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.149 [2024-06-11 06:07:56.671484] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.149 [2024-06-11 06:07:56.671631] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:26.149 [2024-06-11 06:07:56.671839] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:26.149 [2024-06-11 06:07:56.671949] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:26.149 [2024-06-11 06:07:56.672116] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:15:26.149 [2024-06-11 06:07:56.672195] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:26.149 [2024-06-11 06:07:56.672356] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:15:26.149 [2024-06-11 06:07:56.672697] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:15:26.149 [2024-06-11 06:07:56.672810] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:15:26.149 [2024-06-11 06:07:56.673029] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.149 pt2 00:15:26.149 06:07:56 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:26.149 06:07:56 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:26.149 06:07:56 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:26.149 06:07:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:26.149 06:07:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:26.149 06:07:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:26.149 06:07:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:26.149 06:07:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:26.149 06:07:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:26.149 06:07:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:26.149 06:07:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:26.149 06:07:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:26.149 06:07:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.149 06:07:56 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.408 06:07:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:26.408 "name": "raid_bdev1", 00:15:26.409 "uuid": "16aa0ba4-2ee9-4ad5-9458-1bd13fbab3fa", 00:15:26.409 "strip_size_kb": 64, 00:15:26.409 "state": "online", 00:15:26.409 "raid_level": "raid0", 00:15:26.409 "superblock": true, 00:15:26.409 "num_base_bdevs": 2, 00:15:26.409 "num_base_bdevs_discovered": 2, 00:15:26.409 "num_base_bdevs_operational": 2, 00:15:26.409 "base_bdevs_list": [ 00:15:26.409 { 00:15:26.409 "name": "pt1", 00:15:26.409 "uuid": "ef777e8f-69e8-5ed6-97cd-ea9c99f66646", 00:15:26.409 "is_configured": true, 00:15:26.409 "data_offset": 2048, 00:15:26.409 "data_size": 63488 00:15:26.409 }, 00:15:26.409 { 00:15:26.409 "name": "pt2", 00:15:26.409 "uuid": "d58cc98e-a6a9-5f64-9bfc-97b508e4fe24", 00:15:26.409 "is_configured": true, 00:15:26.409 "data_offset": 2048, 00:15:26.409 "data_size": 63488 00:15:26.409 } 00:15:26.409 ] 00:15:26.409 }' 00:15:26.409 06:07:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:26.409 06:07:56 -- common/autotest_common.sh@10 -- # set +x 00:15:26.977 06:07:57 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:26.977 06:07:57 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:27.236 [2024-06-11 06:07:57.798826] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.236 06:07:57 -- bdev/bdev_raid.sh@430 -- # '[' 16aa0ba4-2ee9-4ad5-9458-1bd13fbab3fa '!=' 16aa0ba4-2ee9-4ad5-9458-1bd13fbab3fa ']' 00:15:27.236 06:07:57 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:15:27.236 06:07:57 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:27.236 06:07:57 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:27.236 06:07:57 -- bdev/bdev_raid.sh@511 -- # killprocess 113427 00:15:27.236 06:07:57 -- common/autotest_common.sh@926 -- # '[' -z 113427 ']' 00:15:27.236 06:07:57 -- common/autotest_common.sh@930 -- # kill -0 113427 00:15:27.236 06:07:57 -- common/autotest_common.sh@931 -- # uname 00:15:27.236 06:07:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:27.236 06:07:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 113427 00:15:27.236 06:07:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:27.236 06:07:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:27.236 06:07:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 113427' 00:15:27.236 killing process with pid 113427 00:15:27.236 06:07:57 -- common/autotest_common.sh@945 -- # kill 113427 00:15:27.236 06:07:57 -- common/autotest_common.sh@950 -- # wait 113427 00:15:27.236 [2024-06-11 06:07:57.850944] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:27.236 [2024-06-11 06:07:57.851025] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.236 [2024-06-11 06:07:57.851241] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:27.236 [2024-06-11 06:07:57.851283] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:15:27.495 [2024-06-11 06:07:58.053921] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:28.903 06:07:59 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:28.903 00:15:28.903 real 0m8.383s 00:15:28.903 user 0m13.387s 00:15:28.903 sys 0m1.380s 00:15:28.903 06:07:59 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:15:28.903 06:07:59 -- common/autotest_common.sh@10 -- # set +x 00:15:28.903 ************************************ 00:15:28.903 END TEST raid_superblock_test 00:15:28.903 ************************************ 00:15:28.903 06:07:59 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:28.903 06:07:59 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:15:28.903 06:07:59 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:28.903 06:07:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:28.903 06:07:59 -- common/autotest_common.sh@10 -- # set +x 00:15:28.903 ************************************ 00:15:28.904 START TEST raid_state_function_test 00:15:28.904 ************************************ 00:15:28.904 06:07:59 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 false 00:15:28.904 06:07:59 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:28.904 06:07:59 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:28.904 06:07:59 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:28.904 06:07:59 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:28.904 06:07:59 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:28.904 06:07:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:28.904 06:07:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:28.904 06:07:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:28.904 06:07:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:28.904 06:07:59 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:28.904 06:07:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:28.904 06:07:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:28.904 06:07:59 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:28.904 06:07:59 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:28.904 06:07:59 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:28.904 06:07:59 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:28.904 06:07:59 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:28.904 06:07:59 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:28.904 06:07:59 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:28.904 06:07:59 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:28.904 06:07:59 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:28.904 06:07:59 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:28.904 06:07:59 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:28.904 06:07:59 -- bdev/bdev_raid.sh@226 -- # raid_pid=113678 00:15:28.904 06:07:59 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 113678' 00:15:28.904 Process raid pid: 113678 00:15:28.904 06:07:59 -- bdev/bdev_raid.sh@228 -- # waitforlisten 113678 /var/tmp/spdk-raid.sock 00:15:28.904 06:07:59 -- common/autotest_common.sh@819 -- # '[' -z 113678 ']' 00:15:28.904 06:07:59 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:28.904 06:07:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:28.904 06:07:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:28.904 06:07:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
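Each test in this suite boots a fresh bdev_svc application bound to a private RPC socket and blocks until it answers, as the "Waiting for process to start up..." line below shows. A condensed sketch of that startup handshake using the paths from this run; waitforlisten's polling loop is paraphrased here, not copied:

    app=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    rpc_sock=/var/tmp/spdk-raid.sock
    "$app" -r "$rpc_sock" -i 0 -L bdev_raid &
    raid_pid=$!
    # poll the UNIX domain socket until the target services RPCs
    while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_sock" rpc_get_methods &> /dev/null; do
        kill -0 "$raid_pid" # abort early if the app died during startup
        sleep 0.1
    done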
00:15:28.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:28.904 06:07:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:28.904 06:07:59 -- common/autotest_common.sh@10 -- # set +x 00:15:29.204 [2024-06-11 06:07:59.575768] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:29.204 [2024-06-11 06:07:59.576864] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.205 [2024-06-11 06:07:59.761008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.464 [2024-06-11 06:08:00.001143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.723 [2024-06-11 06:08:00.252267] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:29.983 06:08:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:29.983 06:08:00 -- common/autotest_common.sh@852 -- # return 0 00:15:29.983 06:08:00 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:29.983 [2024-06-11 06:08:00.622750] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:29.983 [2024-06-11 06:08:00.623009] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:29.983 [2024-06-11 06:08:00.623096] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:29.983 [2024-06-11 06:08:00.623148] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:30.242 06:08:00 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:30.242 06:08:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:30.242 06:08:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:30.242 06:08:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:30.242 06:08:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:30.242 06:08:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:30.242 06:08:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:30.242 06:08:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:30.242 06:08:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:30.242 06:08:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:30.242 06:08:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.242 06:08:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.501 06:08:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:30.501 "name": "Existed_Raid", 00:15:30.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.501 "strip_size_kb": 64, 00:15:30.501 "state": "configuring", 00:15:30.501 "raid_level": "concat", 00:15:30.501 "superblock": false, 00:15:30.501 "num_base_bdevs": 2, 00:15:30.501 "num_base_bdevs_discovered": 0, 00:15:30.501 "num_base_bdevs_operational": 2, 00:15:30.501 "base_bdevs_list": [ 00:15:30.501 { 00:15:30.501 "name": "BaseBdev1", 00:15:30.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.501 "is_configured": false, 00:15:30.501 "data_offset": 0, 00:15:30.501 "data_size": 
0 00:15:30.501 }, 00:15:30.501 { 00:15:30.501 "name": "BaseBdev2", 00:15:30.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.501 "is_configured": false, 00:15:30.501 "data_offset": 0, 00:15:30.501 "data_size": 0 00:15:30.501 } 00:15:30.501 ] 00:15:30.501 }' 00:15:30.501 06:08:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:30.501 06:08:00 -- common/autotest_common.sh@10 -- # set +x 00:15:31.069 06:08:01 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:31.069 [2024-06-11 06:08:01.646804] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:31.069 [2024-06-11 06:08:01.647008] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:15:31.070 06:08:01 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:31.327 [2024-06-11 06:08:01.822873] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:31.327 [2024-06-11 06:08:01.823124] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:31.327 [2024-06-11 06:08:01.823208] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:31.327 [2024-06-11 06:08:01.823274] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:31.327 06:08:01 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:31.585 [2024-06-11 06:08:02.042409] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:31.585 BaseBdev1 00:15:31.585 06:08:02 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:31.585 06:08:02 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:31.585 06:08:02 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:31.585 06:08:02 -- common/autotest_common.sh@889 -- # local i 00:15:31.585 06:08:02 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:31.585 06:08:02 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:31.585 06:08:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:31.844 06:08:02 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:31.844 [ 00:15:31.844 { 00:15:31.844 "name": "BaseBdev1", 00:15:31.844 "aliases": [ 00:15:31.844 "8b02f3df-ac6d-479a-8724-85c97cee006d" 00:15:31.844 ], 00:15:31.844 "product_name": "Malloc disk", 00:15:31.844 "block_size": 512, 00:15:31.844 "num_blocks": 65536, 00:15:31.844 "uuid": "8b02f3df-ac6d-479a-8724-85c97cee006d", 00:15:31.844 "assigned_rate_limits": { 00:15:31.844 "rw_ios_per_sec": 0, 00:15:31.844 "rw_mbytes_per_sec": 0, 00:15:31.844 "r_mbytes_per_sec": 0, 00:15:31.844 "w_mbytes_per_sec": 0 00:15:31.844 }, 00:15:31.844 "claimed": true, 00:15:31.844 "claim_type": "exclusive_write", 00:15:31.844 "zoned": false, 00:15:31.844 "supported_io_types": { 00:15:31.844 "read": true, 00:15:31.844 "write": true, 00:15:31.844 "unmap": true, 00:15:31.844 "write_zeroes": true, 00:15:31.844 "flush": true, 00:15:31.844 "reset": true, 00:15:31.844 "compare": false, 00:15:31.844 "compare_and_write": false, 
00:15:31.844 "abort": true, 00:15:31.844 "nvme_admin": false, 00:15:31.844 "nvme_io": false 00:15:31.844 }, 00:15:31.844 "memory_domains": [ 00:15:31.844 { 00:15:31.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.844 "dma_device_type": 2 00:15:31.844 } 00:15:31.844 ], 00:15:31.844 "driver_specific": {} 00:15:31.844 } 00:15:31.844 ] 00:15:31.844 06:08:02 -- common/autotest_common.sh@895 -- # return 0 00:15:31.844 06:08:02 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:31.844 06:08:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:31.844 06:08:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:31.844 06:08:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:31.844 06:08:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:31.844 06:08:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:31.844 06:08:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:31.844 06:08:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:31.844 06:08:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:31.844 06:08:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:31.844 06:08:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.844 06:08:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.103 06:08:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:32.103 "name": "Existed_Raid", 00:15:32.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.103 "strip_size_kb": 64, 00:15:32.103 "state": "configuring", 00:15:32.103 "raid_level": "concat", 00:15:32.103 "superblock": false, 00:15:32.103 "num_base_bdevs": 2, 00:15:32.103 "num_base_bdevs_discovered": 1, 00:15:32.103 "num_base_bdevs_operational": 2, 00:15:32.103 "base_bdevs_list": [ 00:15:32.103 { 00:15:32.103 "name": "BaseBdev1", 00:15:32.103 "uuid": "8b02f3df-ac6d-479a-8724-85c97cee006d", 00:15:32.103 "is_configured": true, 00:15:32.103 "data_offset": 0, 00:15:32.103 "data_size": 65536 00:15:32.103 }, 00:15:32.103 { 00:15:32.103 "name": "BaseBdev2", 00:15:32.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.103 "is_configured": false, 00:15:32.103 "data_offset": 0, 00:15:32.103 "data_size": 0 00:15:32.103 } 00:15:32.103 ] 00:15:32.103 }' 00:15:32.103 06:08:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:32.103 06:08:02 -- common/autotest_common.sh@10 -- # set +x 00:15:32.671 06:08:03 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:32.671 [2024-06-11 06:08:03.302671] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:32.671 [2024-06-11 06:08:03.302923] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:32.930 06:08:03 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:32.930 06:08:03 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:32.930 [2024-06-11 06:08:03.462739] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:32.930 [2024-06-11 06:08:03.465155] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:32.930 [2024-06-11 06:08:03.465344] 
bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:32.930 06:08:03 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:32.930 06:08:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:32.930 06:08:03 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:32.930 06:08:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:32.930 06:08:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:32.930 06:08:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:32.930 06:08:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:32.930 06:08:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:32.930 06:08:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:32.930 06:08:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:32.930 06:08:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:32.930 06:08:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:32.930 06:08:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.930 06:08:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.189 06:08:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:33.189 "name": "Existed_Raid", 00:15:33.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.189 "strip_size_kb": 64, 00:15:33.189 "state": "configuring", 00:15:33.189 "raid_level": "concat", 00:15:33.189 "superblock": false, 00:15:33.189 "num_base_bdevs": 2, 00:15:33.189 "num_base_bdevs_discovered": 1, 00:15:33.189 "num_base_bdevs_operational": 2, 00:15:33.189 "base_bdevs_list": [ 00:15:33.189 { 00:15:33.189 "name": "BaseBdev1", 00:15:33.189 "uuid": "8b02f3df-ac6d-479a-8724-85c97cee006d", 00:15:33.189 "is_configured": true, 00:15:33.189 "data_offset": 0, 00:15:33.189 "data_size": 65536 00:15:33.189 }, 00:15:33.189 { 00:15:33.189 "name": "BaseBdev2", 00:15:33.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.189 "is_configured": false, 00:15:33.189 "data_offset": 0, 00:15:33.189 "data_size": 0 00:15:33.189 } 00:15:33.189 ] 00:15:33.189 }' 00:15:33.189 06:08:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:33.189 06:08:03 -- common/autotest_common.sh@10 -- # set +x 00:15:33.757 06:08:04 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:34.017 [2024-06-11 06:08:04.433999] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:34.017 [2024-06-11 06:08:04.434322] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:15:34.017 [2024-06-11 06:08:04.434365] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:34.017 [2024-06-11 06:08:04.434563] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:34.017 [2024-06-11 06:08:04.435020] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:15:34.017 [2024-06-11 06:08:04.435130] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:15:34.017 [2024-06-11 06:08:04.435514] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.017 BaseBdev2 00:15:34.017 06:08:04 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:34.017 06:08:04 -- 
common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:34.017 06:08:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:34.017 06:08:04 -- common/autotest_common.sh@889 -- # local i 00:15:34.017 06:08:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:34.017 06:08:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:34.017 06:08:04 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:34.277 06:08:04 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:34.277 [ 00:15:34.277 { 00:15:34.277 "name": "BaseBdev2", 00:15:34.277 "aliases": [ 00:15:34.277 "f5feadc2-bb3e-46ca-a2f6-c3cd165d6f00" 00:15:34.277 ], 00:15:34.277 "product_name": "Malloc disk", 00:15:34.277 "block_size": 512, 00:15:34.277 "num_blocks": 65536, 00:15:34.277 "uuid": "f5feadc2-bb3e-46ca-a2f6-c3cd165d6f00", 00:15:34.277 "assigned_rate_limits": { 00:15:34.277 "rw_ios_per_sec": 0, 00:15:34.277 "rw_mbytes_per_sec": 0, 00:15:34.277 "r_mbytes_per_sec": 0, 00:15:34.277 "w_mbytes_per_sec": 0 00:15:34.277 }, 00:15:34.277 "claimed": true, 00:15:34.277 "claim_type": "exclusive_write", 00:15:34.277 "zoned": false, 00:15:34.277 "supported_io_types": { 00:15:34.277 "read": true, 00:15:34.277 "write": true, 00:15:34.277 "unmap": true, 00:15:34.277 "write_zeroes": true, 00:15:34.277 "flush": true, 00:15:34.277 "reset": true, 00:15:34.277 "compare": false, 00:15:34.277 "compare_and_write": false, 00:15:34.277 "abort": true, 00:15:34.277 "nvme_admin": false, 00:15:34.277 "nvme_io": false 00:15:34.277 }, 00:15:34.277 "memory_domains": [ 00:15:34.277 { 00:15:34.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.277 "dma_device_type": 2 00:15:34.277 } 00:15:34.277 ], 00:15:34.277 "driver_specific": {} 00:15:34.277 } 00:15:34.277 ] 00:15:34.277 06:08:04 -- common/autotest_common.sh@895 -- # return 0 00:15:34.277 06:08:04 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:34.277 06:08:04 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:34.277 06:08:04 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:34.277 06:08:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:34.277 06:08:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:34.278 06:08:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:34.278 06:08:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:34.278 06:08:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:34.278 06:08:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:34.278 06:08:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:34.278 06:08:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:34.278 06:08:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:34.278 06:08:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.278 06:08:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.537 06:08:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:34.537 "name": "Existed_Raid", 00:15:34.537 "uuid": "7f098849-754b-46a9-abae-54c2ed032ec1", 00:15:34.537 "strip_size_kb": 64, 00:15:34.537 "state": "online", 00:15:34.537 "raid_level": "concat", 00:15:34.537 "superblock": false, 00:15:34.537 "num_base_bdevs": 2, 00:15:34.537 
"num_base_bdevs_discovered": 2, 00:15:34.537 "num_base_bdevs_operational": 2, 00:15:34.537 "base_bdevs_list": [ 00:15:34.537 { 00:15:34.537 "name": "BaseBdev1", 00:15:34.537 "uuid": "8b02f3df-ac6d-479a-8724-85c97cee006d", 00:15:34.537 "is_configured": true, 00:15:34.537 "data_offset": 0, 00:15:34.537 "data_size": 65536 00:15:34.537 }, 00:15:34.537 { 00:15:34.537 "name": "BaseBdev2", 00:15:34.537 "uuid": "f5feadc2-bb3e-46ca-a2f6-c3cd165d6f00", 00:15:34.537 "is_configured": true, 00:15:34.537 "data_offset": 0, 00:15:34.537 "data_size": 65536 00:15:34.537 } 00:15:34.537 ] 00:15:34.537 }' 00:15:34.537 06:08:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:34.537 06:08:05 -- common/autotest_common.sh@10 -- # set +x 00:15:35.105 06:08:05 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:35.365 [2024-06-11 06:08:05.926410] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:35.365 [2024-06-11 06:08:05.926657] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:35.365 [2024-06-11 06:08:05.926817] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.625 06:08:06 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:35.625 06:08:06 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:35.625 06:08:06 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:35.625 06:08:06 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:35.625 06:08:06 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:35.625 06:08:06 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:35.625 06:08:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:35.625 06:08:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:35.625 06:08:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:35.625 06:08:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:35.625 06:08:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:35.625 06:08:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:35.625 06:08:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:35.625 06:08:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:35.625 06:08:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:35.625 06:08:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.625 06:08:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.884 06:08:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:35.884 "name": "Existed_Raid", 00:15:35.884 "uuid": "7f098849-754b-46a9-abae-54c2ed032ec1", 00:15:35.884 "strip_size_kb": 64, 00:15:35.884 "state": "offline", 00:15:35.884 "raid_level": "concat", 00:15:35.884 "superblock": false, 00:15:35.884 "num_base_bdevs": 2, 00:15:35.884 "num_base_bdevs_discovered": 1, 00:15:35.884 "num_base_bdevs_operational": 1, 00:15:35.884 "base_bdevs_list": [ 00:15:35.884 { 00:15:35.884 "name": null, 00:15:35.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.884 "is_configured": false, 00:15:35.884 "data_offset": 0, 00:15:35.884 "data_size": 65536 00:15:35.884 }, 00:15:35.884 { 00:15:35.884 "name": "BaseBdev2", 00:15:35.884 "uuid": "f5feadc2-bb3e-46ca-a2f6-c3cd165d6f00", 00:15:35.884 "is_configured": true, 00:15:35.884 "data_offset": 0, 00:15:35.884 "data_size": 65536 00:15:35.884 } 00:15:35.884 ] 
00:15:35.884 }' 00:15:35.884 06:08:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:35.884 06:08:06 -- common/autotest_common.sh@10 -- # set +x 00:15:36.451 06:08:06 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:36.451 06:08:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:36.451 06:08:06 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.451 06:08:06 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:36.451 06:08:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:36.451 06:08:07 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:36.452 06:08:07 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:36.710 [2024-06-11 06:08:07.298448] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:36.710 [2024-06-11 06:08:07.298697] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:15:36.969 06:08:07 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:36.969 06:08:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:36.969 06:08:07 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.969 06:08:07 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:36.969 06:08:07 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:36.969 06:08:07 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:36.969 06:08:07 -- bdev/bdev_raid.sh@287 -- # killprocess 113678 00:15:36.969 06:08:07 -- common/autotest_common.sh@926 -- # '[' -z 113678 ']' 00:15:36.969 06:08:07 -- common/autotest_common.sh@930 -- # kill -0 113678 00:15:36.969 06:08:07 -- common/autotest_common.sh@931 -- # uname 00:15:37.228 06:08:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:37.228 06:08:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 113678 00:15:37.228 06:08:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:37.228 06:08:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:37.228 06:08:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 113678' 00:15:37.228 killing process with pid 113678 00:15:37.228 06:08:07 -- common/autotest_common.sh@945 -- # kill 113678 00:15:37.228 06:08:07 -- common/autotest_common.sh@950 -- # wait 113678 00:15:37.228 [2024-06-11 06:08:07.638861] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:37.228 [2024-06-11 06:08:07.638994] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:38.606 ************************************ 00:15:38.606 END TEST raid_state_function_test 00:15:38.606 ************************************ 00:15:38.606 06:08:09 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:38.606 00:15:38.606 real 0m9.524s 00:15:38.606 user 0m15.525s 00:15:38.606 sys 0m1.573s 00:15:38.606 06:08:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:38.606 06:08:09 -- common/autotest_common.sh@10 -- # set +x 00:15:38.606 06:08:09 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:15:38.606 06:08:09 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:38.606 06:08:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:38.606 06:08:09 -- common/autotest_common.sh@10 -- # set +x 00:15:38.606 
************************************ 00:15:38.606 START TEST raid_state_function_test_sb 00:15:38.606 ************************************ 00:15:38.606 06:08:09 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 true 00:15:38.606 06:08:09 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:38.606 06:08:09 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:38.606 06:08:09 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:38.606 06:08:09 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:38.606 06:08:09 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:38.606 06:08:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:38.607 06:08:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:38.607 06:08:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:38.607 06:08:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:38.607 06:08:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:38.607 06:08:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:38.607 06:08:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:38.607 06:08:09 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:38.607 06:08:09 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:38.607 06:08:09 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:38.607 06:08:09 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:38.607 06:08:09 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:38.607 06:08:09 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:38.607 06:08:09 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:38.607 06:08:09 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:38.607 06:08:09 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:38.607 06:08:09 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:38.607 06:08:09 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:38.607 06:08:09 -- bdev/bdev_raid.sh@226 -- # raid_pid=113987 00:15:38.607 06:08:09 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:38.607 06:08:09 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 113987' 00:15:38.607 Process raid pid: 113987 00:15:38.607 06:08:09 -- bdev/bdev_raid.sh@228 -- # waitforlisten 113987 /var/tmp/spdk-raid.sock 00:15:38.607 06:08:09 -- common/autotest_common.sh@819 -- # '[' -z 113987 ']' 00:15:38.607 06:08:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:38.607 06:08:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:38.607 06:08:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:38.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:38.607 06:08:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:38.607 06:08:09 -- common/autotest_common.sh@10 -- # set +x 00:15:38.607 [2024-06-11 06:08:09.160215] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:15:38.607 [2024-06-11 06:08:09.160545] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.866 [2024-06-11 06:08:09.325294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.125 [2024-06-11 06:08:09.566035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.385 [2024-06-11 06:08:09.819650] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.644 06:08:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:39.644 06:08:10 -- common/autotest_common.sh@852 -- # return 0 00:15:39.644 06:08:10 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:39.644 [2024-06-11 06:08:10.266115] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:39.644 [2024-06-11 06:08:10.266379] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:39.644 [2024-06-11 06:08:10.266523] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:39.644 [2024-06-11 06:08:10.266582] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:39.644 06:08:10 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:39.644 06:08:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:39.644 06:08:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:39.644 06:08:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:39.644 06:08:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:39.644 06:08:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:39.644 06:08:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:39.644 06:08:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:39.644 06:08:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:39.644 06:08:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:39.644 06:08:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.644 06:08:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.903 06:08:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:39.903 "name": "Existed_Raid", 00:15:39.903 "uuid": "09731693-28f7-496e-919b-5e78f0e73403", 00:15:39.903 "strip_size_kb": 64, 00:15:39.903 "state": "configuring", 00:15:39.903 "raid_level": "concat", 00:15:39.903 "superblock": true, 00:15:39.903 "num_base_bdevs": 2, 00:15:39.903 "num_base_bdevs_discovered": 0, 00:15:39.903 "num_base_bdevs_operational": 2, 00:15:39.903 "base_bdevs_list": [ 00:15:39.903 { 00:15:39.903 "name": "BaseBdev1", 00:15:39.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.903 "is_configured": false, 00:15:39.903 "data_offset": 0, 00:15:39.903 "data_size": 0 00:15:39.903 }, 00:15:39.903 { 00:15:39.903 "name": "BaseBdev2", 00:15:39.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.903 "is_configured": false, 00:15:39.903 "data_offset": 0, 00:15:39.903 "data_size": 0 00:15:39.903 } 00:15:39.903 ] 00:15:39.903 }' 00:15:39.903 06:08:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:39.903 06:08:10 -- 
common/autotest_common.sh@10 -- # set +x 00:15:40.469 06:08:11 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:40.728 [2024-06-11 06:08:11.262171] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:40.728 [2024-06-11 06:08:11.262370] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:15:40.728 06:08:11 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:40.987 [2024-06-11 06:08:11.438286] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:40.987 [2024-06-11 06:08:11.438544] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:40.987 [2024-06-11 06:08:11.438642] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:40.987 [2024-06-11 06:08:11.438700] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:40.987 06:08:11 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:41.246 [2024-06-11 06:08:11.702873] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:41.246 BaseBdev1 00:15:41.246 06:08:11 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:41.246 06:08:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:41.246 06:08:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:41.246 06:08:11 -- common/autotest_common.sh@889 -- # local i 00:15:41.246 06:08:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:41.246 06:08:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:41.246 06:08:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:41.505 06:08:11 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:41.505 [ 00:15:41.505 { 00:15:41.505 "name": "BaseBdev1", 00:15:41.505 "aliases": [ 00:15:41.505 "323ea2cb-1795-4545-9237-c05a5ee9d4a5" 00:15:41.505 ], 00:15:41.505 "product_name": "Malloc disk", 00:15:41.505 "block_size": 512, 00:15:41.505 "num_blocks": 65536, 00:15:41.505 "uuid": "323ea2cb-1795-4545-9237-c05a5ee9d4a5", 00:15:41.505 "assigned_rate_limits": { 00:15:41.505 "rw_ios_per_sec": 0, 00:15:41.505 "rw_mbytes_per_sec": 0, 00:15:41.505 "r_mbytes_per_sec": 0, 00:15:41.505 "w_mbytes_per_sec": 0 00:15:41.505 }, 00:15:41.505 "claimed": true, 00:15:41.505 "claim_type": "exclusive_write", 00:15:41.505 "zoned": false, 00:15:41.505 "supported_io_types": { 00:15:41.505 "read": true, 00:15:41.505 "write": true, 00:15:41.505 "unmap": true, 00:15:41.505 "write_zeroes": true, 00:15:41.505 "flush": true, 00:15:41.505 "reset": true, 00:15:41.505 "compare": false, 00:15:41.505 "compare_and_write": false, 00:15:41.505 "abort": true, 00:15:41.505 "nvme_admin": false, 00:15:41.505 "nvme_io": false 00:15:41.505 }, 00:15:41.505 "memory_domains": [ 00:15:41.505 { 00:15:41.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.505 "dma_device_type": 2 00:15:41.505 } 00:15:41.505 ], 00:15:41.505 "driver_specific": {} 00:15:41.505 } 00:15:41.505 ] 00:15:41.505 
06:08:12 -- common/autotest_common.sh@895 -- # return 0 00:15:41.505 06:08:12 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:41.505 06:08:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:41.505 06:08:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:41.505 06:08:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:41.505 06:08:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:41.505 06:08:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:41.505 06:08:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:41.505 06:08:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:41.505 06:08:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:41.505 06:08:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:41.505 06:08:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.505 06:08:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.764 06:08:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:41.764 "name": "Existed_Raid", 00:15:41.764 "uuid": "e47ce910-641c-4fb3-8a77-9e748a051a3c", 00:15:41.764 "strip_size_kb": 64, 00:15:41.764 "state": "configuring", 00:15:41.764 "raid_level": "concat", 00:15:41.764 "superblock": true, 00:15:41.764 "num_base_bdevs": 2, 00:15:41.764 "num_base_bdevs_discovered": 1, 00:15:41.764 "num_base_bdevs_operational": 2, 00:15:41.764 "base_bdevs_list": [ 00:15:41.764 { 00:15:41.764 "name": "BaseBdev1", 00:15:41.764 "uuid": "323ea2cb-1795-4545-9237-c05a5ee9d4a5", 00:15:41.764 "is_configured": true, 00:15:41.764 "data_offset": 2048, 00:15:41.764 "data_size": 63488 00:15:41.764 }, 00:15:41.764 { 00:15:41.764 "name": "BaseBdev2", 00:15:41.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.764 "is_configured": false, 00:15:41.764 "data_offset": 0, 00:15:41.764 "data_size": 0 00:15:41.764 } 00:15:41.764 ] 00:15:41.764 }' 00:15:41.764 06:08:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:41.764 06:08:12 -- common/autotest_common.sh@10 -- # set +x 00:15:42.331 06:08:12 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:42.331 [2024-06-11 06:08:12.975106] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:42.331 [2024-06-11 06:08:12.975393] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:42.590 06:08:12 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:42.590 06:08:12 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:42.849 06:08:13 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:43.129 BaseBdev1 00:15:43.129 06:08:13 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:43.129 06:08:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:43.129 06:08:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:43.129 06:08:13 -- common/autotest_common.sh@889 -- # local i 00:15:43.129 06:08:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:43.129 06:08:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:43.129 06:08:13 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:43.389 06:08:13 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:43.389 [ 00:15:43.389 { 00:15:43.389 "name": "BaseBdev1", 00:15:43.389 "aliases": [ 00:15:43.389 "432302eb-6cf6-4da0-8f59-f3b1960a12cd" 00:15:43.389 ], 00:15:43.389 "product_name": "Malloc disk", 00:15:43.389 "block_size": 512, 00:15:43.389 "num_blocks": 65536, 00:15:43.389 "uuid": "432302eb-6cf6-4da0-8f59-f3b1960a12cd", 00:15:43.389 "assigned_rate_limits": { 00:15:43.389 "rw_ios_per_sec": 0, 00:15:43.389 "rw_mbytes_per_sec": 0, 00:15:43.389 "r_mbytes_per_sec": 0, 00:15:43.389 "w_mbytes_per_sec": 0 00:15:43.389 }, 00:15:43.389 "claimed": false, 00:15:43.389 "zoned": false, 00:15:43.389 "supported_io_types": { 00:15:43.389 "read": true, 00:15:43.389 "write": true, 00:15:43.389 "unmap": true, 00:15:43.389 "write_zeroes": true, 00:15:43.389 "flush": true, 00:15:43.389 "reset": true, 00:15:43.389 "compare": false, 00:15:43.389 "compare_and_write": false, 00:15:43.389 "abort": true, 00:15:43.389 "nvme_admin": false, 00:15:43.389 "nvme_io": false 00:15:43.389 }, 00:15:43.389 "memory_domains": [ 00:15:43.389 { 00:15:43.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.389 "dma_device_type": 2 00:15:43.389 } 00:15:43.389 ], 00:15:43.389 "driver_specific": {} 00:15:43.389 } 00:15:43.389 ] 00:15:43.389 06:08:13 -- common/autotest_common.sh@895 -- # return 0 00:15:43.389 06:08:13 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:43.648 [2024-06-11 06:08:14.141068] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:43.648 [2024-06-11 06:08:14.143447] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:43.648 [2024-06-11 06:08:14.143624] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:43.648 06:08:14 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:43.648 06:08:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:43.648 06:08:14 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:43.648 06:08:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:43.648 06:08:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:43.648 06:08:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:43.648 06:08:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:43.648 06:08:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:43.648 06:08:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:43.648 06:08:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:43.648 06:08:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:43.648 06:08:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:43.648 06:08:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.648 06:08:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.907 06:08:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:43.907 "name": "Existed_Raid", 00:15:43.907 "uuid": "aa7e07eb-bdc8-49d9-b795-241cc5164a25", 00:15:43.907 "strip_size_kb": 64, 00:15:43.907 "state": 
"configuring", 00:15:43.907 "raid_level": "concat", 00:15:43.907 "superblock": true, 00:15:43.907 "num_base_bdevs": 2, 00:15:43.907 "num_base_bdevs_discovered": 1, 00:15:43.907 "num_base_bdevs_operational": 2, 00:15:43.907 "base_bdevs_list": [ 00:15:43.907 { 00:15:43.907 "name": "BaseBdev1", 00:15:43.907 "uuid": "432302eb-6cf6-4da0-8f59-f3b1960a12cd", 00:15:43.907 "is_configured": true, 00:15:43.907 "data_offset": 2048, 00:15:43.907 "data_size": 63488 00:15:43.907 }, 00:15:43.907 { 00:15:43.907 "name": "BaseBdev2", 00:15:43.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.907 "is_configured": false, 00:15:43.907 "data_offset": 0, 00:15:43.907 "data_size": 0 00:15:43.908 } 00:15:43.908 ] 00:15:43.908 }' 00:15:43.908 06:08:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:43.908 06:08:14 -- common/autotest_common.sh@10 -- # set +x 00:15:44.475 06:08:14 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:44.735 [2024-06-11 06:08:15.237321] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:44.735 [2024-06-11 06:08:15.237841] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:15:44.735 [2024-06-11 06:08:15.237981] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:44.735 [2024-06-11 06:08:15.238147] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:44.735 [2024-06-11 06:08:15.238659] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:15:44.735 [2024-06-11 06:08:15.238699] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:15:44.735 [2024-06-11 06:08:15.238954] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.735 BaseBdev2 00:15:44.735 06:08:15 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:44.735 06:08:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:44.735 06:08:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:44.735 06:08:15 -- common/autotest_common.sh@889 -- # local i 00:15:44.735 06:08:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:44.735 06:08:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:44.735 06:08:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:44.994 06:08:15 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:45.254 [ 00:15:45.254 { 00:15:45.254 "name": "BaseBdev2", 00:15:45.254 "aliases": [ 00:15:45.254 "026985f4-db51-4e9f-9da8-90c70ef82d35" 00:15:45.254 ], 00:15:45.254 "product_name": "Malloc disk", 00:15:45.254 "block_size": 512, 00:15:45.254 "num_blocks": 65536, 00:15:45.254 "uuid": "026985f4-db51-4e9f-9da8-90c70ef82d35", 00:15:45.254 "assigned_rate_limits": { 00:15:45.254 "rw_ios_per_sec": 0, 00:15:45.254 "rw_mbytes_per_sec": 0, 00:15:45.254 "r_mbytes_per_sec": 0, 00:15:45.254 "w_mbytes_per_sec": 0 00:15:45.254 }, 00:15:45.254 "claimed": true, 00:15:45.254 "claim_type": "exclusive_write", 00:15:45.254 "zoned": false, 00:15:45.254 "supported_io_types": { 00:15:45.254 "read": true, 00:15:45.254 "write": true, 00:15:45.254 "unmap": true, 00:15:45.254 "write_zeroes": true, 00:15:45.254 "flush": true, 00:15:45.254 
"reset": true, 00:15:45.254 "compare": false, 00:15:45.254 "compare_and_write": false, 00:15:45.254 "abort": true, 00:15:45.254 "nvme_admin": false, 00:15:45.254 "nvme_io": false 00:15:45.254 }, 00:15:45.254 "memory_domains": [ 00:15:45.254 { 00:15:45.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.254 "dma_device_type": 2 00:15:45.254 } 00:15:45.254 ], 00:15:45.254 "driver_specific": {} 00:15:45.254 } 00:15:45.254 ] 00:15:45.254 06:08:15 -- common/autotest_common.sh@895 -- # return 0 00:15:45.254 06:08:15 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:45.254 06:08:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:45.254 06:08:15 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:45.254 06:08:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:45.254 06:08:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:45.254 06:08:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:45.254 06:08:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:45.254 06:08:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:45.254 06:08:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:45.254 06:08:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:45.254 06:08:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:45.254 06:08:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:45.254 06:08:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.254 06:08:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.254 06:08:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:45.254 "name": "Existed_Raid", 00:15:45.254 "uuid": "aa7e07eb-bdc8-49d9-b795-241cc5164a25", 00:15:45.254 "strip_size_kb": 64, 00:15:45.254 "state": "online", 00:15:45.254 "raid_level": "concat", 00:15:45.254 "superblock": true, 00:15:45.254 "num_base_bdevs": 2, 00:15:45.254 "num_base_bdevs_discovered": 2, 00:15:45.254 "num_base_bdevs_operational": 2, 00:15:45.254 "base_bdevs_list": [ 00:15:45.254 { 00:15:45.254 "name": "BaseBdev1", 00:15:45.254 "uuid": "432302eb-6cf6-4da0-8f59-f3b1960a12cd", 00:15:45.254 "is_configured": true, 00:15:45.254 "data_offset": 2048, 00:15:45.254 "data_size": 63488 00:15:45.254 }, 00:15:45.254 { 00:15:45.254 "name": "BaseBdev2", 00:15:45.254 "uuid": "026985f4-db51-4e9f-9da8-90c70ef82d35", 00:15:45.254 "is_configured": true, 00:15:45.254 "data_offset": 2048, 00:15:45.254 "data_size": 63488 00:15:45.254 } 00:15:45.254 ] 00:15:45.254 }' 00:15:45.254 06:08:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:45.254 06:08:15 -- common/autotest_common.sh@10 -- # set +x 00:15:45.823 06:08:16 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:46.082 [2024-06-11 06:08:16.525674] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:46.082 [2024-06-11 06:08:16.525850] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:46.082 [2024-06-11 06:08:16.526002] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.082 06:08:16 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:46.082 06:08:16 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:46.082 06:08:16 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:46.082 06:08:16 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:46.082 
06:08:16 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:46.082 06:08:16 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:46.082 06:08:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:46.082 06:08:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:46.082 06:08:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:46.082 06:08:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:46.082 06:08:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:46.082 06:08:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:46.082 06:08:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:46.082 06:08:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:46.082 06:08:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:46.082 06:08:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.082 06:08:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.342 06:08:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:46.342 "name": "Existed_Raid", 00:15:46.342 "uuid": "aa7e07eb-bdc8-49d9-b795-241cc5164a25", 00:15:46.342 "strip_size_kb": 64, 00:15:46.342 "state": "offline", 00:15:46.342 "raid_level": "concat", 00:15:46.342 "superblock": true, 00:15:46.342 "num_base_bdevs": 2, 00:15:46.342 "num_base_bdevs_discovered": 1, 00:15:46.342 "num_base_bdevs_operational": 1, 00:15:46.342 "base_bdevs_list": [ 00:15:46.342 { 00:15:46.342 "name": null, 00:15:46.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.342 "is_configured": false, 00:15:46.342 "data_offset": 2048, 00:15:46.342 "data_size": 63488 00:15:46.342 }, 00:15:46.342 { 00:15:46.342 "name": "BaseBdev2", 00:15:46.342 "uuid": "026985f4-db51-4e9f-9da8-90c70ef82d35", 00:15:46.342 "is_configured": true, 00:15:46.342 "data_offset": 2048, 00:15:46.342 "data_size": 63488 00:15:46.342 } 00:15:46.342 ] 00:15:46.342 }' 00:15:46.342 06:08:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:46.342 06:08:16 -- common/autotest_common.sh@10 -- # set +x 00:15:46.910 06:08:17 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:46.910 06:08:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:46.910 06:08:17 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.910 06:08:17 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:47.168 06:08:17 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:47.168 06:08:17 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:47.168 06:08:17 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:47.427 [2024-06-11 06:08:17.913517] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:47.427 [2024-06-11 06:08:17.913784] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:15:47.427 06:08:18 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:47.427 06:08:18 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:47.427 06:08:18 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:47.427 06:08:18 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.685 06:08:18 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:15:47.685 06:08:18 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:47.685 06:08:18 -- bdev/bdev_raid.sh@287 -- # killprocess 113987 00:15:47.685 06:08:18 -- common/autotest_common.sh@926 -- # '[' -z 113987 ']' 00:15:47.685 06:08:18 -- common/autotest_common.sh@930 -- # kill -0 113987 00:15:47.685 06:08:18 -- common/autotest_common.sh@931 -- # uname 00:15:47.685 06:08:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:47.685 06:08:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 113987 00:15:47.685 06:08:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:47.685 06:08:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:47.685 06:08:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 113987' 00:15:47.685 killing process with pid 113987 00:15:47.685 06:08:18 -- common/autotest_common.sh@945 -- # kill 113987 00:15:47.685 [2024-06-11 06:08:18.310441] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:47.685 06:08:18 -- common/autotest_common.sh@950 -- # wait 113987 00:15:47.685 [2024-06-11 06:08:18.310706] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:49.064 ************************************ 00:15:49.064 END TEST raid_state_function_test_sb 00:15:49.064 06:08:19 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:49.064 00:15:49.064 real 0m10.588s 00:15:49.064 user 0m17.236s 00:15:49.064 sys 0m1.828s 00:15:49.064 06:08:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:49.064 06:08:19 -- common/autotest_common.sh@10 -- # set +x 00:15:49.064 ************************************ 00:15:49.322 06:08:19 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:15:49.322 06:08:19 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:49.322 06:08:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:49.322 06:08:19 -- common/autotest_common.sh@10 -- # set +x 00:15:49.322 ************************************ 00:15:49.322 START TEST raid_superblock_test 00:15:49.322 ************************************ 00:15:49.323 06:08:19 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 2 00:15:49.323 06:08:19 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:15:49.323 06:08:19 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:49.323 06:08:19 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:49.323 06:08:19 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:49.323 06:08:19 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:49.323 06:08:19 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:49.323 06:08:19 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:49.323 06:08:19 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:49.323 06:08:19 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:49.323 06:08:19 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:49.323 06:08:19 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:49.323 06:08:19 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:49.323 06:08:19 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:49.323 06:08:19 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:15:49.323 06:08:19 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:49.323 06:08:19 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:49.323 06:08:19 -- bdev/bdev_raid.sh@357 -- # raid_pid=114319 00:15:49.323 06:08:19 -- bdev/bdev_raid.sh@356 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:49.323 06:08:19 -- bdev/bdev_raid.sh@358 -- # waitforlisten 114319 /var/tmp/spdk-raid.sock 00:15:49.323 06:08:19 -- common/autotest_common.sh@819 -- # '[' -z 114319 ']' 00:15:49.323 06:08:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:49.323 06:08:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:49.323 06:08:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:49.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:49.323 06:08:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:49.323 06:08:19 -- common/autotest_common.sh@10 -- # set +x 00:15:49.323 [2024-06-11 06:08:19.832058] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:49.323 [2024-06-11 06:08:19.832519] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114319 ] 00:15:49.581 [2024-06-11 06:08:20.020073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.839 [2024-06-11 06:08:20.293764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.097 [2024-06-11 06:08:20.541884] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:51.034 06:08:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:51.034 06:08:21 -- common/autotest_common.sh@852 -- # return 0 00:15:51.034 06:08:21 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:51.034 06:08:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:51.034 06:08:21 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:51.034 06:08:21 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:51.034 06:08:21 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:51.034 06:08:21 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:51.034 06:08:21 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:51.034 06:08:21 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:51.034 06:08:21 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:51.034 malloc1 00:15:51.034 06:08:21 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:51.293 [2024-06-11 06:08:21.828323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:51.293 [2024-06-11 06:08:21.828609] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.293 [2024-06-11 06:08:21.828775] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:15:51.293 [2024-06-11 06:08:21.828941] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.293 [2024-06-11 06:08:21.831670] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.293 [2024-06-11 06:08:21.831825] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:51.293 pt1 00:15:51.293 06:08:21 -- bdev/bdev_raid.sh@361 
-- # (( i++ )) 00:15:51.293 06:08:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:51.293 06:08:21 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:51.293 06:08:21 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:51.293 06:08:21 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:51.293 06:08:21 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:51.293 06:08:21 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:51.293 06:08:21 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:51.293 06:08:21 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:51.552 malloc2 00:15:51.553 06:08:22 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:51.812 [2024-06-11 06:08:22.245777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:51.812 [2024-06-11 06:08:22.246021] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.812 [2024-06-11 06:08:22.246150] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:51.812 [2024-06-11 06:08:22.246284] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.812 [2024-06-11 06:08:22.249006] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.812 [2024-06-11 06:08:22.249155] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:51.812 pt2 00:15:51.812 06:08:22 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:51.812 06:08:22 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:51.812 06:08:22 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:15:52.071 [2024-06-11 06:08:22.489976] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:52.071 [2024-06-11 06:08:22.492356] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:52.071 [2024-06-11 06:08:22.492693] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 00:15:52.071 [2024-06-11 06:08:22.492810] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:52.071 [2024-06-11 06:08:22.493023] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:15:52.071 [2024-06-11 06:08:22.493485] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:15:52.071 [2024-06-11 06:08:22.493524] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80 00:15:52.071 [2024-06-11 06:08:22.493820] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.071 06:08:22 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:52.071 06:08:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:52.071 06:08:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:52.071 06:08:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:52.071 06:08:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:52.071 06:08:22 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=2 00:15:52.071 06:08:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:52.071 06:08:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:52.071 06:08:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:52.071 06:08:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:52.071 06:08:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.071 06:08:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.330 06:08:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:52.330 "name": "raid_bdev1", 00:15:52.330 "uuid": "892091f1-3fce-48d3-843d-27e3c30dff08", 00:15:52.330 "strip_size_kb": 64, 00:15:52.330 "state": "online", 00:15:52.330 "raid_level": "concat", 00:15:52.330 "superblock": true, 00:15:52.330 "num_base_bdevs": 2, 00:15:52.330 "num_base_bdevs_discovered": 2, 00:15:52.330 "num_base_bdevs_operational": 2, 00:15:52.330 "base_bdevs_list": [ 00:15:52.330 { 00:15:52.330 "name": "pt1", 00:15:52.330 "uuid": "dfdd4856-0c84-5d67-80e6-156d039c0099", 00:15:52.330 "is_configured": true, 00:15:52.330 "data_offset": 2048, 00:15:52.330 "data_size": 63488 00:15:52.330 }, 00:15:52.330 { 00:15:52.330 "name": "pt2", 00:15:52.330 "uuid": "632df82a-a93e-57ff-b312-b79f0c220f95", 00:15:52.330 "is_configured": true, 00:15:52.330 "data_offset": 2048, 00:15:52.330 "data_size": 63488 00:15:52.330 } 00:15:52.330 ] 00:15:52.330 }' 00:15:52.330 06:08:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:52.330 06:08:22 -- common/autotest_common.sh@10 -- # set +x 00:15:52.922 06:08:23 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:52.922 06:08:23 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:52.922 [2024-06-11 06:08:23.414226] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:52.922 06:08:23 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=892091f1-3fce-48d3-843d-27e3c30dff08 00:15:52.922 06:08:23 -- bdev/bdev_raid.sh@380 -- # '[' -z 892091f1-3fce-48d3-843d-27e3c30dff08 ']' 00:15:52.922 06:08:23 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:53.181 [2024-06-11 06:08:23.682114] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:53.181 [2024-06-11 06:08:23.682304] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:53.181 [2024-06-11 06:08:23.682538] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.181 [2024-06-11 06:08:23.682685] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:53.181 [2024-06-11 06:08:23.682762] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:15:53.181 06:08:23 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.181 06:08:23 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:53.440 06:08:23 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:53.440 06:08:23 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:53.440 06:08:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:53.440 06:08:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:15:53.699 06:08:24 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:53.699 06:08:24 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:53.699 06:08:24 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:53.699 06:08:24 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:53.958 06:08:24 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:53.958 06:08:24 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:53.958 06:08:24 -- common/autotest_common.sh@640 -- # local es=0 00:15:53.958 06:08:24 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:53.958 06:08:24 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:53.958 06:08:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:53.958 06:08:24 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:53.958 06:08:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:53.958 06:08:24 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:53.958 06:08:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:53.958 06:08:24 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:53.958 06:08:24 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:53.958 06:08:24 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:54.217 [2024-06-11 06:08:24.646301] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:54.217 [2024-06-11 06:08:24.648678] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:54.217 [2024-06-11 06:08:24.648897] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:54.217 [2024-06-11 06:08:24.649062] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:54.217 [2024-06-11 06:08:24.649132] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:54.217 [2024-06-11 06:08:24.649209] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring 00:15:54.217 request: 00:15:54.217 { 00:15:54.217 "name": "raid_bdev1", 00:15:54.217 "raid_level": "concat", 00:15:54.217 "base_bdevs": [ 00:15:54.217 "malloc1", 00:15:54.217 "malloc2" 00:15:54.217 ], 00:15:54.217 "superblock": false, 00:15:54.217 "strip_size_kb": 64, 00:15:54.217 "method": "bdev_raid_create", 00:15:54.217 "req_id": 1 00:15:54.217 } 00:15:54.217 Got JSON-RPC error response 00:15:54.217 response: 00:15:54.217 { 00:15:54.217 "code": -17, 00:15:54.217 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:54.217 } 00:15:54.217 06:08:24 -- common/autotest_common.sh@643 -- # es=1 00:15:54.217 06:08:24 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:54.217 06:08:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:54.217 06:08:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:54.217 06:08:24 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.217 06:08:24 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:54.217 06:08:24 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:54.217 06:08:24 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:54.217 06:08:24 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:54.476 [2024-06-11 06:08:25.054273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:54.476 [2024-06-11 06:08:25.054587] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.476 [2024-06-11 06:08:25.054662] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:54.476 [2024-06-11 06:08:25.054760] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.476 [2024-06-11 06:08:25.057384] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.476 [2024-06-11 06:08:25.057564] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:54.476 [2024-06-11 06:08:25.057762] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:54.476 [2024-06-11 06:08:25.057900] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:54.476 pt1 00:15:54.476 06:08:25 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:15:54.476 06:08:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:54.476 06:08:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:54.476 06:08:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:54.476 06:08:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:54.476 06:08:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:54.476 06:08:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:54.476 06:08:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:54.476 06:08:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:54.476 06:08:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:54.476 06:08:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.476 06:08:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.735 06:08:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:54.735 "name": "raid_bdev1", 00:15:54.735 "uuid": "892091f1-3fce-48d3-843d-27e3c30dff08", 00:15:54.735 "strip_size_kb": 64, 00:15:54.735 "state": "configuring", 00:15:54.735 "raid_level": "concat", 00:15:54.735 "superblock": true, 00:15:54.735 "num_base_bdevs": 2, 00:15:54.735 "num_base_bdevs_discovered": 1, 00:15:54.735 "num_base_bdevs_operational": 2, 00:15:54.735 "base_bdevs_list": [ 00:15:54.735 { 00:15:54.735 "name": "pt1", 00:15:54.735 "uuid": "dfdd4856-0c84-5d67-80e6-156d039c0099", 00:15:54.735 "is_configured": true, 00:15:54.735 "data_offset": 2048, 00:15:54.735 "data_size": 63488 00:15:54.735 }, 00:15:54.735 { 00:15:54.735 "name": null, 00:15:54.735 "uuid": 
"632df82a-a93e-57ff-b312-b79f0c220f95", 00:15:54.735 "is_configured": false, 00:15:54.735 "data_offset": 2048, 00:15:54.735 "data_size": 63488 00:15:54.735 } 00:15:54.735 ] 00:15:54.735 }' 00:15:54.735 06:08:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:54.735 06:08:25 -- common/autotest_common.sh@10 -- # set +x 00:15:55.303 06:08:25 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:15:55.303 06:08:25 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:55.303 06:08:25 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:55.303 06:08:25 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:55.562 [2024-06-11 06:08:26.014520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:55.563 [2024-06-11 06:08:26.014826] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.563 [2024-06-11 06:08:26.014902] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:55.563 [2024-06-11 06:08:26.015001] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.563 [2024-06-11 06:08:26.015578] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.563 [2024-06-11 06:08:26.015728] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:55.563 [2024-06-11 06:08:26.015943] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:55.563 [2024-06-11 06:08:26.016038] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:55.563 [2024-06-11 06:08:26.016207] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:15:55.563 [2024-06-11 06:08:26.016335] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:55.563 [2024-06-11 06:08:26.016508] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:15:55.563 [2024-06-11 06:08:26.017049] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:15:55.563 [2024-06-11 06:08:26.017158] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:15:55.563 [2024-06-11 06:08:26.017371] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.563 pt2 00:15:55.563 06:08:26 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:55.563 06:08:26 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:55.563 06:08:26 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:55.563 06:08:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:55.563 06:08:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:55.563 06:08:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:55.563 06:08:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:55.563 06:08:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:55.563 06:08:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:55.563 06:08:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:55.563 06:08:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:55.563 06:08:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:55.563 06:08:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.563 06:08:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.822 06:08:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:55.822 "name": "raid_bdev1", 00:15:55.822 "uuid": "892091f1-3fce-48d3-843d-27e3c30dff08", 00:15:55.822 "strip_size_kb": 64, 00:15:55.822 "state": "online", 00:15:55.822 "raid_level": "concat", 00:15:55.822 "superblock": true, 00:15:55.822 "num_base_bdevs": 2, 00:15:55.822 "num_base_bdevs_discovered": 2, 00:15:55.822 "num_base_bdevs_operational": 2, 00:15:55.822 "base_bdevs_list": [ 00:15:55.822 { 00:15:55.822 "name": "pt1", 00:15:55.822 "uuid": "dfdd4856-0c84-5d67-80e6-156d039c0099", 00:15:55.822 "is_configured": true, 00:15:55.822 "data_offset": 2048, 00:15:55.822 "data_size": 63488 00:15:55.822 }, 00:15:55.822 { 00:15:55.822 "name": "pt2", 00:15:55.822 "uuid": "632df82a-a93e-57ff-b312-b79f0c220f95", 00:15:55.822 "is_configured": true, 00:15:55.822 "data_offset": 2048, 00:15:55.822 "data_size": 63488 00:15:55.822 } 00:15:55.822 ] 00:15:55.822 }' 00:15:55.822 06:08:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:55.822 06:08:26 -- common/autotest_common.sh@10 -- # set +x 00:15:56.390 06:08:26 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:56.390 06:08:26 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:56.390 [2024-06-11 06:08:26.886824] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.390 06:08:26 -- bdev/bdev_raid.sh@430 -- # '[' 892091f1-3fce-48d3-843d-27e3c30dff08 '!=' 892091f1-3fce-48d3-843d-27e3c30dff08 ']' 00:15:56.390 06:08:26 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:15:56.390 06:08:26 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:56.390 06:08:26 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:56.390 06:08:26 -- bdev/bdev_raid.sh@511 -- # killprocess 114319 00:15:56.390 06:08:26 -- common/autotest_common.sh@926 -- # '[' -z 114319 ']' 00:15:56.390 06:08:26 -- common/autotest_common.sh@930 -- # kill -0 114319 00:15:56.390 06:08:26 -- common/autotest_common.sh@931 -- # uname 00:15:56.390 06:08:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:56.390 06:08:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114319 00:15:56.390 killing process with pid 114319 00:15:56.390 06:08:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:56.390 06:08:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:56.390 06:08:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114319' 00:15:56.390 06:08:26 -- common/autotest_common.sh@945 -- # kill 114319 00:15:56.390 [2024-06-11 06:08:26.934919] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:56.390 06:08:26 -- common/autotest_common.sh@950 -- # wait 114319 00:15:56.390 [2024-06-11 06:08:26.935001] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:56.390 [2024-06-11 06:08:26.935054] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:56.390 [2024-06-11 06:08:26.935062] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:15:56.649 [2024-06-11 06:08:27.136915] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:58.028 06:08:28 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:58.029 00:15:58.029 real 0m8.739s 
00:15:58.029 user 0m13.576s 00:15:58.029 sys 0m1.432s 00:15:58.029 06:08:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:58.029 ************************************ 00:15:58.029 END TEST raid_superblock_test 00:15:58.029 06:08:28 -- common/autotest_common.sh@10 -- # set +x 00:15:58.029 ************************************ 00:15:58.029 06:08:28 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:58.029 06:08:28 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:15:58.029 06:08:28 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:58.029 06:08:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:58.029 06:08:28 -- common/autotest_common.sh@10 -- # set +x 00:15:58.029 ************************************ 00:15:58.029 START TEST raid_state_function_test 00:15:58.029 ************************************ 00:15:58.029 06:08:28 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 false 00:15:58.029 06:08:28 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:58.029 06:08:28 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:58.029 06:08:28 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:58.029 06:08:28 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:58.029 06:08:28 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:58.029 06:08:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:58.029 06:08:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:58.029 06:08:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:58.029 06:08:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:58.029 06:08:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:58.029 06:08:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:58.029 06:08:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:58.029 06:08:28 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:58.029 06:08:28 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:58.029 06:08:28 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:58.029 06:08:28 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:58.029 06:08:28 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:58.029 06:08:28 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:58.029 06:08:28 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:58.029 06:08:28 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:58.029 06:08:28 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:58.029 06:08:28 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:58.029 06:08:28 -- bdev/bdev_raid.sh@226 -- # raid_pid=114572 00:15:58.029 06:08:28 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 114572' 00:15:58.029 Process raid pid: 114572 00:15:58.029 06:08:28 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:58.029 06:08:28 -- bdev/bdev_raid.sh@228 -- # waitforlisten 114572 /var/tmp/spdk-raid.sock 00:15:58.029 06:08:28 -- common/autotest_common.sh@819 -- # '[' -z 114572 ']' 00:15:58.029 06:08:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:58.029 06:08:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:58.029 06:08:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:15:58.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:58.029 06:08:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:58.029 06:08:28 -- common/autotest_common.sh@10 -- # set +x 00:15:58.029 [2024-06-11 06:08:28.651339] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:15:58.029 [2024-06-11 06:08:28.652367] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.288 [2024-06-11 06:08:28.833872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.553 [2024-06-11 06:08:29.073559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.815 [2024-06-11 06:08:29.322652] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.117 06:08:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:59.117 06:08:29 -- common/autotest_common.sh@852 -- # return 0 00:15:59.117 06:08:29 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:59.388 [2024-06-11 06:08:29.772391] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:59.388 [2024-06-11 06:08:29.772653] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:59.388 [2024-06-11 06:08:29.772780] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:59.388 [2024-06-11 06:08:29.772922] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:59.388 06:08:29 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:59.388 06:08:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:59.388 06:08:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:59.388 06:08:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:59.388 06:08:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:59.388 06:08:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:59.388 06:08:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:59.388 06:08:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:59.388 06:08:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:59.388 06:08:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:59.388 06:08:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.388 06:08:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.388 06:08:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:59.388 "name": "Existed_Raid", 00:15:59.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.388 "strip_size_kb": 0, 00:15:59.388 "state": "configuring", 00:15:59.388 "raid_level": "raid1", 00:15:59.388 "superblock": false, 00:15:59.388 "num_base_bdevs": 2, 00:15:59.388 "num_base_bdevs_discovered": 0, 00:15:59.388 "num_base_bdevs_operational": 2, 00:15:59.388 "base_bdevs_list": [ 00:15:59.388 { 00:15:59.388 "name": "BaseBdev1", 00:15:59.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.388 "is_configured": false, 00:15:59.388 "data_offset": 0, 00:15:59.388 "data_size": 0 
00:15:59.388 }, 00:15:59.388 { 00:15:59.388 "name": "BaseBdev2", 00:15:59.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.388 "is_configured": false, 00:15:59.388 "data_offset": 0, 00:15:59.388 "data_size": 0 00:15:59.388 } 00:15:59.388 ] 00:15:59.388 }' 00:15:59.388 06:08:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:59.388 06:08:29 -- common/autotest_common.sh@10 -- # set +x 00:15:59.956 06:08:30 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:00.215 [2024-06-11 06:08:30.728426] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:00.215 [2024-06-11 06:08:30.728624] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:16:00.215 06:08:30 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:00.474 [2024-06-11 06:08:30.888452] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:00.474 [2024-06-11 06:08:30.888718] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:00.474 [2024-06-11 06:08:30.888816] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:00.474 [2024-06-11 06:08:30.888878] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:00.474 06:08:30 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:00.474 [2024-06-11 06:08:31.099766] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:00.474 BaseBdev1 00:16:00.734 06:08:31 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:00.734 06:08:31 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:00.734 06:08:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:00.734 06:08:31 -- common/autotest_common.sh@889 -- # local i 00:16:00.734 06:08:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:00.734 06:08:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:00.734 06:08:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:00.734 06:08:31 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:00.993 [ 00:16:00.993 { 00:16:00.993 "name": "BaseBdev1", 00:16:00.993 "aliases": [ 00:16:00.993 "b22210a4-a3a2-44ed-a5a8-e0562b0573af" 00:16:00.993 ], 00:16:00.993 "product_name": "Malloc disk", 00:16:00.993 "block_size": 512, 00:16:00.993 "num_blocks": 65536, 00:16:00.993 "uuid": "b22210a4-a3a2-44ed-a5a8-e0562b0573af", 00:16:00.993 "assigned_rate_limits": { 00:16:00.993 "rw_ios_per_sec": 0, 00:16:00.993 "rw_mbytes_per_sec": 0, 00:16:00.993 "r_mbytes_per_sec": 0, 00:16:00.993 "w_mbytes_per_sec": 0 00:16:00.993 }, 00:16:00.993 "claimed": true, 00:16:00.993 "claim_type": "exclusive_write", 00:16:00.993 "zoned": false, 00:16:00.993 "supported_io_types": { 00:16:00.993 "read": true, 00:16:00.993 "write": true, 00:16:00.993 "unmap": true, 00:16:00.993 "write_zeroes": true, 00:16:00.993 "flush": true, 00:16:00.993 "reset": true, 00:16:00.993 "compare": false, 00:16:00.993 "compare_and_write": false, 
00:16:00.993 "abort": true, 00:16:00.993 "nvme_admin": false, 00:16:00.993 "nvme_io": false 00:16:00.993 }, 00:16:00.993 "memory_domains": [ 00:16:00.993 { 00:16:00.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.993 "dma_device_type": 2 00:16:00.993 } 00:16:00.993 ], 00:16:00.993 "driver_specific": {} 00:16:00.993 } 00:16:00.993 ] 00:16:00.993 06:08:31 -- common/autotest_common.sh@895 -- # return 0 00:16:00.993 06:08:31 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:00.993 06:08:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:00.993 06:08:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:00.993 06:08:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:00.993 06:08:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:00.993 06:08:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:00.993 06:08:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:00.993 06:08:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:00.993 06:08:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:00.993 06:08:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:00.993 06:08:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.993 06:08:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.252 06:08:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:01.252 "name": "Existed_Raid", 00:16:01.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.252 "strip_size_kb": 0, 00:16:01.252 "state": "configuring", 00:16:01.252 "raid_level": "raid1", 00:16:01.252 "superblock": false, 00:16:01.252 "num_base_bdevs": 2, 00:16:01.252 "num_base_bdevs_discovered": 1, 00:16:01.252 "num_base_bdevs_operational": 2, 00:16:01.252 "base_bdevs_list": [ 00:16:01.252 { 00:16:01.252 "name": "BaseBdev1", 00:16:01.252 "uuid": "b22210a4-a3a2-44ed-a5a8-e0562b0573af", 00:16:01.252 "is_configured": true, 00:16:01.252 "data_offset": 0, 00:16:01.252 "data_size": 65536 00:16:01.252 }, 00:16:01.252 { 00:16:01.252 "name": "BaseBdev2", 00:16:01.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.252 "is_configured": false, 00:16:01.252 "data_offset": 0, 00:16:01.252 "data_size": 0 00:16:01.252 } 00:16:01.252 ] 00:16:01.252 }' 00:16:01.252 06:08:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:01.252 06:08:31 -- common/autotest_common.sh@10 -- # set +x 00:16:01.820 06:08:32 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:02.078 [2024-06-11 06:08:32.520050] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:02.078 [2024-06-11 06:08:32.520282] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:02.078 06:08:32 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:02.078 06:08:32 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:02.337 [2024-06-11 06:08:32.788188] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:02.337 [2024-06-11 06:08:32.790682] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:02.337 [2024-06-11 06:08:32.790850] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:02.337 06:08:32 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:02.337 06:08:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:02.337 06:08:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:02.337 06:08:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:02.337 06:08:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:02.337 06:08:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:02.337 06:08:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:02.337 06:08:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:02.337 06:08:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:02.337 06:08:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:02.337 06:08:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:02.337 06:08:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:02.337 06:08:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.337 06:08:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.597 06:08:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:02.597 "name": "Existed_Raid", 00:16:02.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.597 "strip_size_kb": 0, 00:16:02.597 "state": "configuring", 00:16:02.597 "raid_level": "raid1", 00:16:02.597 "superblock": false, 00:16:02.597 "num_base_bdevs": 2, 00:16:02.597 "num_base_bdevs_discovered": 1, 00:16:02.597 "num_base_bdevs_operational": 2, 00:16:02.597 "base_bdevs_list": [ 00:16:02.597 { 00:16:02.597 "name": "BaseBdev1", 00:16:02.597 "uuid": "b22210a4-a3a2-44ed-a5a8-e0562b0573af", 00:16:02.597 "is_configured": true, 00:16:02.597 "data_offset": 0, 00:16:02.597 "data_size": 65536 00:16:02.597 }, 00:16:02.597 { 00:16:02.598 "name": "BaseBdev2", 00:16:02.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.598 "is_configured": false, 00:16:02.598 "data_offset": 0, 00:16:02.598 "data_size": 0 00:16:02.598 } 00:16:02.598 ] 00:16:02.598 }' 00:16:02.598 06:08:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:02.598 06:08:33 -- common/autotest_common.sh@10 -- # set +x 00:16:03.166 06:08:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:03.166 [2024-06-11 06:08:33.771436] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:03.166 [2024-06-11 06:08:33.771657] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:16:03.166 [2024-06-11 06:08:33.771698] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:03.166 [2024-06-11 06:08:33.771897] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:16:03.166 [2024-06-11 06:08:33.772345] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:16:03.166 [2024-06-11 06:08:33.772450] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:16:03.166 [2024-06-11 06:08:33.772874] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.166 BaseBdev2 00:16:03.166 06:08:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:03.166 06:08:33 -- 
common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:03.166 06:08:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:03.166 06:08:33 -- common/autotest_common.sh@889 -- # local i 00:16:03.166 06:08:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:03.166 06:08:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:03.166 06:08:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:03.425 06:08:34 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:03.684 [ 00:16:03.685 { 00:16:03.685 "name": "BaseBdev2", 00:16:03.685 "aliases": [ 00:16:03.685 "5547820a-c6ac-4ffc-a4ed-250c2535664d" 00:16:03.685 ], 00:16:03.685 "product_name": "Malloc disk", 00:16:03.685 "block_size": 512, 00:16:03.685 "num_blocks": 65536, 00:16:03.685 "uuid": "5547820a-c6ac-4ffc-a4ed-250c2535664d", 00:16:03.685 "assigned_rate_limits": { 00:16:03.685 "rw_ios_per_sec": 0, 00:16:03.685 "rw_mbytes_per_sec": 0, 00:16:03.685 "r_mbytes_per_sec": 0, 00:16:03.685 "w_mbytes_per_sec": 0 00:16:03.685 }, 00:16:03.685 "claimed": true, 00:16:03.685 "claim_type": "exclusive_write", 00:16:03.685 "zoned": false, 00:16:03.685 "supported_io_types": { 00:16:03.685 "read": true, 00:16:03.685 "write": true, 00:16:03.685 "unmap": true, 00:16:03.685 "write_zeroes": true, 00:16:03.685 "flush": true, 00:16:03.685 "reset": true, 00:16:03.685 "compare": false, 00:16:03.685 "compare_and_write": false, 00:16:03.685 "abort": true, 00:16:03.685 "nvme_admin": false, 00:16:03.685 "nvme_io": false 00:16:03.685 }, 00:16:03.685 "memory_domains": [ 00:16:03.685 { 00:16:03.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.685 "dma_device_type": 2 00:16:03.685 } 00:16:03.685 ], 00:16:03.685 "driver_specific": {} 00:16:03.685 } 00:16:03.685 ] 00:16:03.685 06:08:34 -- common/autotest_common.sh@895 -- # return 0 00:16:03.685 06:08:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:03.685 06:08:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:03.685 06:08:34 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:03.685 06:08:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:03.685 06:08:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:03.685 06:08:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:03.685 06:08:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:03.685 06:08:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:03.685 06:08:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:03.685 06:08:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:03.685 06:08:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:03.685 06:08:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:03.685 06:08:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.685 06:08:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.944 06:08:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:03.944 "name": "Existed_Raid", 00:16:03.944 "uuid": "2c721519-a7ca-49cb-b70b-6bfe9d6fe662", 00:16:03.944 "strip_size_kb": 0, 00:16:03.944 "state": "online", 00:16:03.944 "raid_level": "raid1", 00:16:03.944 "superblock": false, 00:16:03.944 "num_base_bdevs": 2, 00:16:03.944 
"num_base_bdevs_discovered": 2, 00:16:03.944 "num_base_bdevs_operational": 2, 00:16:03.944 "base_bdevs_list": [ 00:16:03.944 { 00:16:03.944 "name": "BaseBdev1", 00:16:03.944 "uuid": "b22210a4-a3a2-44ed-a5a8-e0562b0573af", 00:16:03.944 "is_configured": true, 00:16:03.944 "data_offset": 0, 00:16:03.944 "data_size": 65536 00:16:03.944 }, 00:16:03.944 { 00:16:03.944 "name": "BaseBdev2", 00:16:03.944 "uuid": "5547820a-c6ac-4ffc-a4ed-250c2535664d", 00:16:03.944 "is_configured": true, 00:16:03.944 "data_offset": 0, 00:16:03.944 "data_size": 65536 00:16:03.944 } 00:16:03.944 ] 00:16:03.944 }' 00:16:03.944 06:08:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:03.944 06:08:34 -- common/autotest_common.sh@10 -- # set +x 00:16:04.512 06:08:34 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:04.772 [2024-06-11 06:08:35.173368] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:04.772 06:08:35 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:04.772 06:08:35 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:16:04.772 06:08:35 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:04.772 06:08:35 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:04.772 06:08:35 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:16:04.772 06:08:35 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:04.772 06:08:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:04.772 06:08:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:04.772 06:08:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:04.772 06:08:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:04.772 06:08:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:04.772 06:08:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:04.772 06:08:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:04.772 06:08:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:04.772 06:08:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:04.772 06:08:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.772 06:08:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.031 06:08:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:05.031 "name": "Existed_Raid", 00:16:05.031 "uuid": "2c721519-a7ca-49cb-b70b-6bfe9d6fe662", 00:16:05.031 "strip_size_kb": 0, 00:16:05.031 "state": "online", 00:16:05.031 "raid_level": "raid1", 00:16:05.031 "superblock": false, 00:16:05.031 "num_base_bdevs": 2, 00:16:05.031 "num_base_bdevs_discovered": 1, 00:16:05.031 "num_base_bdevs_operational": 1, 00:16:05.031 "base_bdevs_list": [ 00:16:05.031 { 00:16:05.031 "name": null, 00:16:05.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.031 "is_configured": false, 00:16:05.031 "data_offset": 0, 00:16:05.031 "data_size": 65536 00:16:05.031 }, 00:16:05.031 { 00:16:05.031 "name": "BaseBdev2", 00:16:05.031 "uuid": "5547820a-c6ac-4ffc-a4ed-250c2535664d", 00:16:05.031 "is_configured": true, 00:16:05.031 "data_offset": 0, 00:16:05.031 "data_size": 65536 00:16:05.031 } 00:16:05.031 ] 00:16:05.031 }' 00:16:05.031 06:08:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:05.031 06:08:35 -- common/autotest_common.sh@10 -- # set +x 00:16:05.599 06:08:36 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:05.599 06:08:36 -- bdev/bdev_raid.sh@273 -- # 
(( i < num_base_bdevs )) 00:16:05.599 06:08:36 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.599 06:08:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:05.858 06:08:36 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:05.858 06:08:36 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:05.858 06:08:36 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:06.117 [2024-06-11 06:08:36.612172] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:06.117 [2024-06-11 06:08:36.612369] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:06.117 [2024-06-11 06:08:36.612620] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.117 [2024-06-11 06:08:36.715990] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.117 [2024-06-11 06:08:36.716268] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:16:06.117 06:08:36 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:06.117 06:08:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:06.117 06:08:36 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.117 06:08:36 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:06.376 06:08:36 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:06.376 06:08:36 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:06.376 06:08:36 -- bdev/bdev_raid.sh@287 -- # killprocess 114572 00:16:06.376 06:08:36 -- common/autotest_common.sh@926 -- # '[' -z 114572 ']' 00:16:06.376 06:08:36 -- common/autotest_common.sh@930 -- # kill -0 114572 00:16:06.376 06:08:36 -- common/autotest_common.sh@931 -- # uname 00:16:06.376 06:08:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:06.376 06:08:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114572 00:16:06.376 killing process with pid 114572 00:16:06.376 06:08:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:06.376 06:08:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:06.376 06:08:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114572' 00:16:06.376 06:08:37 -- common/autotest_common.sh@945 -- # kill 114572 00:16:06.376 06:08:37 -- common/autotest_common.sh@950 -- # wait 114572 00:16:06.376 [2024-06-11 06:08:37.009209] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:06.376 [2024-06-11 06:08:37.009345] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:07.756 ************************************ 00:16:07.756 END TEST raid_state_function_test 00:16:07.756 ************************************ 00:16:07.756 06:08:38 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:07.756 00:16:07.756 real 0m9.805s 00:16:07.756 user 0m16.033s 00:16:07.756 sys 0m1.631s 00:16:07.756 06:08:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:07.756 06:08:38 -- common/autotest_common.sh@10 -- # set +x 00:16:08.015 06:08:38 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:16:08.015 06:08:38 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:08.015 06:08:38 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:16:08.015 06:08:38 -- common/autotest_common.sh@10 -- # set +x 00:16:08.015 ************************************ 00:16:08.015 START TEST raid_state_function_test_sb 00:16:08.015 ************************************ 00:16:08.015 06:08:38 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 true 00:16:08.015 06:08:38 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:16:08.015 06:08:38 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:16:08.015 06:08:38 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:08.015 06:08:38 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:08.015 06:08:38 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:08.015 06:08:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:08.015 06:08:38 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:08.015 06:08:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:08.015 06:08:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:08.015 06:08:38 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:08.015 06:08:38 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:08.015 06:08:38 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:08.015 06:08:38 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:08.015 06:08:38 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:08.015 06:08:38 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:08.015 06:08:38 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:08.015 06:08:38 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:08.015 06:08:38 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:08.015 06:08:38 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:16:08.015 06:08:38 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:16:08.015 06:08:38 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:08.015 06:08:38 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:08.015 06:08:38 -- bdev/bdev_raid.sh@226 -- # raid_pid=114893 00:16:08.015 06:08:38 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:08.015 Process raid pid: 114893 00:16:08.015 06:08:38 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 114893' 00:16:08.015 06:08:38 -- bdev/bdev_raid.sh@228 -- # waitforlisten 114893 /var/tmp/spdk-raid.sock 00:16:08.015 06:08:38 -- common/autotest_common.sh@819 -- # '[' -z 114893 ']' 00:16:08.015 06:08:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:08.015 06:08:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:08.015 06:08:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:08.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:08.015 06:08:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:08.015 06:08:38 -- common/autotest_common.sh@10 -- # set +x 00:16:08.015 [2024-06-11 06:08:38.510676] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
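Note: the banner above marks the sb variant bringing up its own bdev_svc instance. The flags on the launch line traced earlier map straight onto the DPDK EAL parameters printed on the next line: -r picks the private JSON-RPC socket, -i 0 sets the shared-memory id (surfacing as --file-prefix=spdk0), and -L bdev_raid turns on the *DEBUG* traces that fill this log. A minimal sketch of that launch, paths as used by this job:

    # private RPC socket, shm id 0 (=> --file-prefix=spdk0 in the EAL line), raid debug traces on
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid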
00:16:08.015 [2024-06-11 06:08:38.511029] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.274 [2024-06-11 06:08:38.676229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.274 [2024-06-11 06:08:38.919086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.534 [2024-06-11 06:08:39.165353] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:09.101 06:08:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:09.101 06:08:39 -- common/autotest_common.sh@852 -- # return 0 00:16:09.101 06:08:39 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:09.101 [2024-06-11 06:08:39.683362] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:09.101 [2024-06-11 06:08:39.683627] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:09.101 [2024-06-11 06:08:39.683789] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:09.101 [2024-06-11 06:08:39.683885] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:09.101 06:08:39 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:09.101 06:08:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:09.101 06:08:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:09.101 06:08:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:09.101 06:08:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:09.101 06:08:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:09.101 06:08:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:09.101 06:08:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:09.101 06:08:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:09.101 06:08:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:09.101 06:08:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.101 06:08:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.365 06:08:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:09.365 "name": "Existed_Raid", 00:16:09.365 "uuid": "023e2fb1-9a00-4c94-a625-598d967c560f", 00:16:09.365 "strip_size_kb": 0, 00:16:09.365 "state": "configuring", 00:16:09.365 "raid_level": "raid1", 00:16:09.365 "superblock": true, 00:16:09.365 "num_base_bdevs": 2, 00:16:09.365 "num_base_bdevs_discovered": 0, 00:16:09.365 "num_base_bdevs_operational": 2, 00:16:09.365 "base_bdevs_list": [ 00:16:09.365 { 00:16:09.365 "name": "BaseBdev1", 00:16:09.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.365 "is_configured": false, 00:16:09.365 "data_offset": 0, 00:16:09.365 "data_size": 0 00:16:09.365 }, 00:16:09.365 { 00:16:09.365 "name": "BaseBdev2", 00:16:09.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.365 "is_configured": false, 00:16:09.365 "data_offset": 0, 00:16:09.365 "data_size": 0 00:16:09.365 } 00:16:09.365 ] 00:16:09.365 }' 00:16:09.365 06:08:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:09.365 06:08:39 -- 
common/autotest_common.sh@10 -- # set +x 00:16:09.933 06:08:40 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:10.192 [2024-06-11 06:08:40.631404] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:10.192 [2024-06-11 06:08:40.631595] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:16:10.192 06:08:40 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:10.192 [2024-06-11 06:08:40.807507] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:10.192 [2024-06-11 06:08:40.807754] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:10.192 [2024-06-11 06:08:40.807841] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:10.192 [2024-06-11 06:08:40.807897] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:10.192 06:08:40 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:10.759 [2024-06-11 06:08:41.099534] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:10.759 BaseBdev1 00:16:10.759 06:08:41 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:10.759 06:08:41 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:10.759 06:08:41 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:10.759 06:08:41 -- common/autotest_common.sh@889 -- # local i 00:16:10.759 06:08:41 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:10.759 06:08:41 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:10.759 06:08:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:10.759 06:08:41 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:11.018 [ 00:16:11.018 { 00:16:11.018 "name": "BaseBdev1", 00:16:11.018 "aliases": [ 00:16:11.018 "fbdf3fad-244a-42b3-a934-242bffa7c1e0" 00:16:11.018 ], 00:16:11.018 "product_name": "Malloc disk", 00:16:11.018 "block_size": 512, 00:16:11.018 "num_blocks": 65536, 00:16:11.018 "uuid": "fbdf3fad-244a-42b3-a934-242bffa7c1e0", 00:16:11.018 "assigned_rate_limits": { 00:16:11.018 "rw_ios_per_sec": 0, 00:16:11.018 "rw_mbytes_per_sec": 0, 00:16:11.018 "r_mbytes_per_sec": 0, 00:16:11.018 "w_mbytes_per_sec": 0 00:16:11.018 }, 00:16:11.018 "claimed": true, 00:16:11.018 "claim_type": "exclusive_write", 00:16:11.018 "zoned": false, 00:16:11.018 "supported_io_types": { 00:16:11.018 "read": true, 00:16:11.018 "write": true, 00:16:11.018 "unmap": true, 00:16:11.018 "write_zeroes": true, 00:16:11.018 "flush": true, 00:16:11.018 "reset": true, 00:16:11.018 "compare": false, 00:16:11.018 "compare_and_write": false, 00:16:11.018 "abort": true, 00:16:11.018 "nvme_admin": false, 00:16:11.018 "nvme_io": false 00:16:11.018 }, 00:16:11.018 "memory_domains": [ 00:16:11.018 { 00:16:11.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.018 "dma_device_type": 2 00:16:11.018 } 00:16:11.018 ], 00:16:11.018 "driver_specific": {} 00:16:11.018 } 00:16:11.018 ] 00:16:11.018 06:08:41 -- 
common/autotest_common.sh@895 -- # return 0 00:16:11.018 06:08:41 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:11.018 06:08:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:11.018 06:08:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:11.018 06:08:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:11.018 06:08:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:11.018 06:08:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:11.018 06:08:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:11.018 06:08:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:11.018 06:08:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:11.018 06:08:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:11.018 06:08:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.018 06:08:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.277 06:08:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:11.277 "name": "Existed_Raid", 00:16:11.277 "uuid": "3452beca-5f99-4f79-980e-bc373e253bf2", 00:16:11.277 "strip_size_kb": 0, 00:16:11.277 "state": "configuring", 00:16:11.277 "raid_level": "raid1", 00:16:11.277 "superblock": true, 00:16:11.277 "num_base_bdevs": 2, 00:16:11.277 "num_base_bdevs_discovered": 1, 00:16:11.277 "num_base_bdevs_operational": 2, 00:16:11.277 "base_bdevs_list": [ 00:16:11.277 { 00:16:11.277 "name": "BaseBdev1", 00:16:11.277 "uuid": "fbdf3fad-244a-42b3-a934-242bffa7c1e0", 00:16:11.277 "is_configured": true, 00:16:11.277 "data_offset": 2048, 00:16:11.277 "data_size": 63488 00:16:11.277 }, 00:16:11.277 { 00:16:11.277 "name": "BaseBdev2", 00:16:11.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.277 "is_configured": false, 00:16:11.277 "data_offset": 0, 00:16:11.277 "data_size": 0 00:16:11.277 } 00:16:11.277 ] 00:16:11.277 }' 00:16:11.277 06:08:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:11.277 06:08:41 -- common/autotest_common.sh@10 -- # set +x 00:16:11.843 06:08:42 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:12.102 [2024-06-11 06:08:42.667817] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:12.102 [2024-06-11 06:08:42.668064] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:12.102 06:08:42 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:12.102 06:08:42 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:12.669 06:08:43 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:12.669 BaseBdev1 00:16:12.669 06:08:43 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:12.669 06:08:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:12.669 06:08:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:12.669 06:08:43 -- common/autotest_common.sh@889 -- # local i 00:16:12.669 06:08:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:12.669 06:08:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:12.669 06:08:43 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:12.929 06:08:43 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:13.188 [ 00:16:13.188 { 00:16:13.188 "name": "BaseBdev1", 00:16:13.188 "aliases": [ 00:16:13.188 "425ad61f-e3eb-46d2-b6df-a83ecb4c9f4b" 00:16:13.188 ], 00:16:13.188 "product_name": "Malloc disk", 00:16:13.188 "block_size": 512, 00:16:13.188 "num_blocks": 65536, 00:16:13.188 "uuid": "425ad61f-e3eb-46d2-b6df-a83ecb4c9f4b", 00:16:13.188 "assigned_rate_limits": { 00:16:13.188 "rw_ios_per_sec": 0, 00:16:13.188 "rw_mbytes_per_sec": 0, 00:16:13.188 "r_mbytes_per_sec": 0, 00:16:13.188 "w_mbytes_per_sec": 0 00:16:13.188 }, 00:16:13.188 "claimed": false, 00:16:13.188 "zoned": false, 00:16:13.188 "supported_io_types": { 00:16:13.188 "read": true, 00:16:13.188 "write": true, 00:16:13.188 "unmap": true, 00:16:13.188 "write_zeroes": true, 00:16:13.188 "flush": true, 00:16:13.188 "reset": true, 00:16:13.188 "compare": false, 00:16:13.188 "compare_and_write": false, 00:16:13.188 "abort": true, 00:16:13.188 "nvme_admin": false, 00:16:13.188 "nvme_io": false 00:16:13.188 }, 00:16:13.188 "memory_domains": [ 00:16:13.188 { 00:16:13.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.188 "dma_device_type": 2 00:16:13.188 } 00:16:13.188 ], 00:16:13.188 "driver_specific": {} 00:16:13.188 } 00:16:13.188 ] 00:16:13.188 06:08:43 -- common/autotest_common.sh@895 -- # return 0 00:16:13.188 06:08:43 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:13.188 [2024-06-11 06:08:43.808775] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:13.188 [2024-06-11 06:08:43.811221] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:13.188 [2024-06-11 06:08:43.811402] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:13.188 06:08:43 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:13.188 06:08:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:13.188 06:08:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:13.188 06:08:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:13.188 06:08:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:13.188 06:08:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:13.188 06:08:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:13.188 06:08:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:13.188 06:08:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:13.188 06:08:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:13.188 06:08:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:13.188 06:08:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:13.447 06:08:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.447 06:08:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.447 06:08:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:13.447 "name": "Existed_Raid", 00:16:13.447 "uuid": "4c4e0c4b-d1aa-4033-ba63-47873bdd97cc", 00:16:13.447 "strip_size_kb": 0, 00:16:13.447 "state": "configuring", 
00:16:13.447 "raid_level": "raid1", 00:16:13.447 "superblock": true, 00:16:13.447 "num_base_bdevs": 2, 00:16:13.447 "num_base_bdevs_discovered": 1, 00:16:13.447 "num_base_bdevs_operational": 2, 00:16:13.447 "base_bdevs_list": [ 00:16:13.447 { 00:16:13.447 "name": "BaseBdev1", 00:16:13.447 "uuid": "425ad61f-e3eb-46d2-b6df-a83ecb4c9f4b", 00:16:13.447 "is_configured": true, 00:16:13.447 "data_offset": 2048, 00:16:13.447 "data_size": 63488 00:16:13.447 }, 00:16:13.447 { 00:16:13.447 "name": "BaseBdev2", 00:16:13.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.447 "is_configured": false, 00:16:13.447 "data_offset": 0, 00:16:13.447 "data_size": 0 00:16:13.447 } 00:16:13.447 ] 00:16:13.447 }' 00:16:13.447 06:08:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:13.447 06:08:44 -- common/autotest_common.sh@10 -- # set +x 00:16:14.071 06:08:44 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:14.331 [2024-06-11 06:08:44.775354] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:14.331 [2024-06-11 06:08:44.775874] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:16:14.331 [2024-06-11 06:08:44.775988] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:14.331 [2024-06-11 06:08:44.776177] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:14.331 [2024-06-11 06:08:44.776699] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:16:14.331 [2024-06-11 06:08:44.776827] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:16:14.331 [2024-06-11 06:08:44.777070] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.331 BaseBdev2 00:16:14.331 06:08:44 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:14.331 06:08:44 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:14.331 06:08:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:14.331 06:08:44 -- common/autotest_common.sh@889 -- # local i 00:16:14.331 06:08:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:14.331 06:08:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:14.331 06:08:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:14.331 06:08:44 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:14.590 [ 00:16:14.590 { 00:16:14.590 "name": "BaseBdev2", 00:16:14.590 "aliases": [ 00:16:14.590 "b471539b-d781-4aff-b1ce-fad991692b86" 00:16:14.590 ], 00:16:14.590 "product_name": "Malloc disk", 00:16:14.590 "block_size": 512, 00:16:14.590 "num_blocks": 65536, 00:16:14.590 "uuid": "b471539b-d781-4aff-b1ce-fad991692b86", 00:16:14.590 "assigned_rate_limits": { 00:16:14.590 "rw_ios_per_sec": 0, 00:16:14.590 "rw_mbytes_per_sec": 0, 00:16:14.590 "r_mbytes_per_sec": 0, 00:16:14.590 "w_mbytes_per_sec": 0 00:16:14.590 }, 00:16:14.590 "claimed": true, 00:16:14.590 "claim_type": "exclusive_write", 00:16:14.590 "zoned": false, 00:16:14.590 "supported_io_types": { 00:16:14.590 "read": true, 00:16:14.590 "write": true, 00:16:14.590 "unmap": true, 00:16:14.590 "write_zeroes": true, 00:16:14.590 "flush": true, 00:16:14.590 "reset": true, 
00:16:14.590 "compare": false, 00:16:14.590 "compare_and_write": false, 00:16:14.590 "abort": true, 00:16:14.590 "nvme_admin": false, 00:16:14.590 "nvme_io": false 00:16:14.590 }, 00:16:14.590 "memory_domains": [ 00:16:14.590 { 00:16:14.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.590 "dma_device_type": 2 00:16:14.590 } 00:16:14.590 ], 00:16:14.590 "driver_specific": {} 00:16:14.590 } 00:16:14.590 ] 00:16:14.590 06:08:45 -- common/autotest_common.sh@895 -- # return 0 00:16:14.590 06:08:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:14.590 06:08:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:14.590 06:08:45 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:14.590 06:08:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:14.590 06:08:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:14.590 06:08:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:14.590 06:08:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:14.590 06:08:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:14.590 06:08:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:14.590 06:08:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:14.590 06:08:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:14.590 06:08:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:14.590 06:08:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.590 06:08:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.849 06:08:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:14.849 "name": "Existed_Raid", 00:16:14.849 "uuid": "4c4e0c4b-d1aa-4033-ba63-47873bdd97cc", 00:16:14.849 "strip_size_kb": 0, 00:16:14.849 "state": "online", 00:16:14.849 "raid_level": "raid1", 00:16:14.849 "superblock": true, 00:16:14.849 "num_base_bdevs": 2, 00:16:14.849 "num_base_bdevs_discovered": 2, 00:16:14.849 "num_base_bdevs_operational": 2, 00:16:14.849 "base_bdevs_list": [ 00:16:14.849 { 00:16:14.849 "name": "BaseBdev1", 00:16:14.849 "uuid": "425ad61f-e3eb-46d2-b6df-a83ecb4c9f4b", 00:16:14.849 "is_configured": true, 00:16:14.849 "data_offset": 2048, 00:16:14.849 "data_size": 63488 00:16:14.849 }, 00:16:14.849 { 00:16:14.849 "name": "BaseBdev2", 00:16:14.849 "uuid": "b471539b-d781-4aff-b1ce-fad991692b86", 00:16:14.849 "is_configured": true, 00:16:14.850 "data_offset": 2048, 00:16:14.850 "data_size": 63488 00:16:14.850 } 00:16:14.850 ] 00:16:14.850 }' 00:16:14.850 06:08:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:14.850 06:08:45 -- common/autotest_common.sh@10 -- # set +x 00:16:15.418 06:08:46 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:15.677 [2024-06-11 06:08:46.183699] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:15.677 06:08:46 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:15.677 06:08:46 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:16:15.677 06:08:46 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:15.677 06:08:46 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:15.677 06:08:46 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:16:15.677 06:08:46 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:15.677 06:08:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:15.677 
06:08:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:15.677 06:08:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:15.677 06:08:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:15.677 06:08:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:15.677 06:08:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:15.677 06:08:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:15.677 06:08:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:15.677 06:08:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:15.677 06:08:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.677 06:08:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.937 06:08:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:15.937 "name": "Existed_Raid", 00:16:15.937 "uuid": "4c4e0c4b-d1aa-4033-ba63-47873bdd97cc", 00:16:15.937 "strip_size_kb": 0, 00:16:15.937 "state": "online", 00:16:15.937 "raid_level": "raid1", 00:16:15.937 "superblock": true, 00:16:15.937 "num_base_bdevs": 2, 00:16:15.937 "num_base_bdevs_discovered": 1, 00:16:15.937 "num_base_bdevs_operational": 1, 00:16:15.937 "base_bdevs_list": [ 00:16:15.937 { 00:16:15.937 "name": null, 00:16:15.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.937 "is_configured": false, 00:16:15.937 "data_offset": 2048, 00:16:15.937 "data_size": 63488 00:16:15.937 }, 00:16:15.937 { 00:16:15.937 "name": "BaseBdev2", 00:16:15.937 "uuid": "b471539b-d781-4aff-b1ce-fad991692b86", 00:16:15.937 "is_configured": true, 00:16:15.937 "data_offset": 2048, 00:16:15.937 "data_size": 63488 00:16:15.937 } 00:16:15.937 ] 00:16:15.937 }' 00:16:15.937 06:08:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:15.937 06:08:46 -- common/autotest_common.sh@10 -- # set +x 00:16:16.506 06:08:47 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:16.506 06:08:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:16.506 06:08:47 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.506 06:08:47 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:16.765 06:08:47 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:16.765 06:08:47 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:16.765 06:08:47 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:17.027 [2024-06-11 06:08:47.549920] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:17.027 [2024-06-11 06:08:47.550113] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:17.027 [2024-06-11 06:08:47.550273] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:17.027 [2024-06-11 06:08:47.652696] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:17.027 [2024-06-11 06:08:47.652873] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:16:17.286 06:08:47 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:17.286 06:08:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:17.286 06:08:47 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:17.286 06:08:47 -- bdev/bdev_raid.sh@281 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.287 06:08:47 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:17.287 06:08:47 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:17.287 06:08:47 -- bdev/bdev_raid.sh@287 -- # killprocess 114893 00:16:17.287 06:08:47 -- common/autotest_common.sh@926 -- # '[' -z 114893 ']' 00:16:17.287 06:08:47 -- common/autotest_common.sh@930 -- # kill -0 114893 00:16:17.287 06:08:47 -- common/autotest_common.sh@931 -- # uname 00:16:17.287 06:08:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:17.547 06:08:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114893 00:16:17.547 06:08:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:17.547 06:08:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:17.547 06:08:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114893' 00:16:17.547 killing process with pid 114893 00:16:17.547 06:08:47 -- common/autotest_common.sh@945 -- # kill 114893 00:16:17.547 [2024-06-11 06:08:47.948195] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:17.547 06:08:47 -- common/autotest_common.sh@950 -- # wait 114893 00:16:17.547 [2024-06-11 06:08:47.948457] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:18.927 ************************************ 00:16:18.927 END TEST raid_state_function_test_sb 00:16:18.927 ************************************ 00:16:18.927 06:08:49 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:18.927 00:16:18.927 real 0m10.878s 00:16:18.927 user 0m17.808s 00:16:18.927 sys 0m1.841s 00:16:18.927 06:08:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:18.927 06:08:49 -- common/autotest_common.sh@10 -- # set +x 00:16:18.927 06:08:49 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:16:18.927 06:08:49 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:18.927 06:08:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:18.927 06:08:49 -- common/autotest_common.sh@10 -- # set +x 00:16:18.927 ************************************ 00:16:18.927 START TEST raid_superblock_test 00:16:18.927 ************************************ 00:16:18.927 06:08:49 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 2 00:16:18.927 06:08:49 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:16:18.927 06:08:49 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:16:18.927 06:08:49 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:18.927 06:08:49 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:18.927 06:08:49 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:18.927 06:08:49 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:18.927 06:08:49 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:18.927 06:08:49 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:18.927 06:08:49 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:18.927 06:08:49 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:18.927 06:08:49 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:18.927 06:08:49 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:18.927 06:08:49 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:18.927 06:08:49 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:16:18.927 06:08:49 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:16:18.927 06:08:49 -- bdev/bdev_raid.sh@357 -- # raid_pid=115224 00:16:18.927 
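Note: raid_superblock_test launches bdev_svc without -i, so the EAL line below derives its file prefix from the pid (--file-prefix=spdk_pid115224); waitforlisten 115224 then blocks until the new process answers on /var/tmp/spdk-raid.sock. A rough stand-in for that wait loop; the probe RPC is an assumption (any cheap method works, rpc_get_methods is what newer autotest_common.sh versions use):

    # poll the private socket until the app is serving JSON-RPC
    # (rpc_get_methods as the probe is an assumption, not lifted from this log)
    while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done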
06:08:49 -- bdev/bdev_raid.sh@358 -- # waitforlisten 115224 /var/tmp/spdk-raid.sock 00:16:18.927 06:08:49 -- common/autotest_common.sh@819 -- # '[' -z 115224 ']' 00:16:18.927 06:08:49 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:18.927 06:08:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:18.927 06:08:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:18.927 06:08:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:18.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:18.927 06:08:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:18.927 06:08:49 -- common/autotest_common.sh@10 -- # set +x 00:16:18.927 [2024-06-11 06:08:49.472616] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:18.927 [2024-06-11 06:08:49.473742] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115224 ] 00:16:19.186 [2024-06-11 06:08:49.657197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.445 [2024-06-11 06:08:49.902274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.811 [2024-06-11 06:08:50.150200] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:19.811 06:08:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:19.811 06:08:50 -- common/autotest_common.sh@852 -- # return 0 00:16:19.811 06:08:50 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:19.811 06:08:50 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:19.811 06:08:50 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:19.811 06:08:50 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:19.811 06:08:50 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:19.811 06:08:50 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:19.811 06:08:50 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:19.811 06:08:50 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:19.811 06:08:50 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:20.071 malloc1 00:16:20.071 06:08:50 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:20.330 [2024-06-11 06:08:50.850329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:20.330 [2024-06-11 06:08:50.850627] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.330 [2024-06-11 06:08:50.850697] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:16:20.330 [2024-06-11 06:08:50.850818] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.330 [2024-06-11 06:08:50.853615] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.330 [2024-06-11 06:08:50.853774] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:20.330 pt1 
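Note: the two RPCs traced above build each base device as a passthru bdev layered on a malloc bdev, with the UUID pinned by the test (pt1 gets ...0001; pt2, created next, gets ...0002). Stripped of the xtrace noise, the sequence reduces to:

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    # 32 MiB malloc backing bdev with 512-byte blocks
    $rpc bdev_malloc_create 32 512 -b malloc1
    # passthru bdev pt1 layered on top of it, UUID fixed by the test
    $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001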
00:16:20.330 06:08:50 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:20.330 06:08:50 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:20.330 06:08:50 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:20.330 06:08:50 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:20.330 06:08:50 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:20.330 06:08:50 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:20.330 06:08:50 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:20.330 06:08:50 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:20.330 06:08:50 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:20.589 malloc2 00:16:20.589 06:08:51 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:20.849 [2024-06-11 06:08:51.322550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:20.849 [2024-06-11 06:08:51.322824] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:20.849 [2024-06-11 06:08:51.323021] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:20.849 [2024-06-11 06:08:51.323175] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:20.849 [2024-06-11 06:08:51.325917] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:20.849 [2024-06-11 06:08:51.326099] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:20.849 pt2 00:16:20.849 06:08:51 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:20.849 06:08:51 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:20.849 06:08:51 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:16:21.108 [2024-06-11 06:08:51.562661] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:21.108 [2024-06-11 06:08:51.565048] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:21.108 [2024-06-11 06:08:51.565381] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 00:16:21.108 [2024-06-11 06:08:51.565520] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:21.108 [2024-06-11 06:08:51.565707] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:16:21.108 [2024-06-11 06:08:51.566245] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:16:21.108 [2024-06-11 06:08:51.566353] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80 00:16:21.108 [2024-06-11 06:08:51.566639] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.108 06:08:51 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:21.108 06:08:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:21.108 06:08:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:21.108 06:08:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:21.108 06:08:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:21.108 06:08:51 -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:21.108 06:08:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:21.108 06:08:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:21.108 06:08:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:21.108 06:08:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:21.108 06:08:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.108 06:08:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.108 06:08:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:21.108 "name": "raid_bdev1", 00:16:21.108 "uuid": "fc3ebaed-a7f6-49bb-ba5d-19fd6edbe139", 00:16:21.108 "strip_size_kb": 0, 00:16:21.108 "state": "online", 00:16:21.108 "raid_level": "raid1", 00:16:21.108 "superblock": true, 00:16:21.108 "num_base_bdevs": 2, 00:16:21.108 "num_base_bdevs_discovered": 2, 00:16:21.108 "num_base_bdevs_operational": 2, 00:16:21.108 "base_bdevs_list": [ 00:16:21.108 { 00:16:21.108 "name": "pt1", 00:16:21.108 "uuid": "60054faa-b865-52c2-96fc-236db3cc8ebc", 00:16:21.108 "is_configured": true, 00:16:21.108 "data_offset": 2048, 00:16:21.108 "data_size": 63488 00:16:21.108 }, 00:16:21.108 { 00:16:21.108 "name": "pt2", 00:16:21.108 "uuid": "324e554a-ae22-514d-adad-686e3c58a30a", 00:16:21.108 "is_configured": true, 00:16:21.108 "data_offset": 2048, 00:16:21.108 "data_size": 63488 00:16:21.108 } 00:16:21.108 ] 00:16:21.108 }' 00:16:21.108 06:08:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:21.108 06:08:51 -- common/autotest_common.sh@10 -- # set +x 00:16:21.676 06:08:52 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:21.676 06:08:52 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:21.934 [2024-06-11 06:08:52.508471] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:21.934 06:08:52 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=fc3ebaed-a7f6-49bb-ba5d-19fd6edbe139 00:16:21.934 06:08:52 -- bdev/bdev_raid.sh@380 -- # '[' -z fc3ebaed-a7f6-49bb-ba5d-19fd6edbe139 ']' 00:16:21.934 06:08:52 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:22.193 [2024-06-11 06:08:52.752306] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:22.194 [2024-06-11 06:08:52.752489] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:22.194 [2024-06-11 06:08:52.752726] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:22.194 [2024-06-11 06:08:52.752855] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:22.194 [2024-06-11 06:08:52.753042] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:16:22.194 06:08:52 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.194 06:08:52 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:22.453 06:08:52 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:22.453 06:08:52 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:22.453 06:08:52 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:22.453 06:08:52 -- bdev/bdev_raid.sh@393 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:22.453 06:08:53 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:22.453 06:08:53 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:22.712 06:08:53 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:22.712 06:08:53 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:22.971 06:08:53 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:22.971 06:08:53 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:22.971 06:08:53 -- common/autotest_common.sh@640 -- # local es=0 00:16:22.971 06:08:53 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:22.971 06:08:53 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.971 06:08:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:22.971 06:08:53 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.971 06:08:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:22.971 06:08:53 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.971 06:08:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:22.971 06:08:53 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.971 06:08:53 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:22.971 06:08:53 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:23.231 [2024-06-11 06:08:53.764436] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:23.231 [2024-06-11 06:08:53.766924] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:23.231 [2024-06-11 06:08:53.767134] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:23.231 [2024-06-11 06:08:53.767310] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:23.231 [2024-06-11 06:08:53.767428] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:23.231 [2024-06-11 06:08:53.767465] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring 00:16:23.231 request: 00:16:23.231 { 00:16:23.231 "name": "raid_bdev1", 00:16:23.231 "raid_level": "raid1", 00:16:23.231 "base_bdevs": [ 00:16:23.231 "malloc1", 00:16:23.231 "malloc2" 00:16:23.231 ], 00:16:23.231 "superblock": false, 00:16:23.231 "method": "bdev_raid_create", 00:16:23.231 "req_id": 1 00:16:23.231 } 00:16:23.231 Got JSON-RPC error response 00:16:23.231 response: 00:16:23.231 { 00:16:23.231 "code": -17, 00:16:23.231 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:23.231 } 00:16:23.231 06:08:53 -- common/autotest_common.sh@643 -- # es=1 00:16:23.231 
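NOTE: the es=1 captured just above records an expected failure, not a bug: malloc1 and malloc2 still carry the raid superblock written by the earlier "bdev_raid_create ... -s" run, so the examine path claims them for the old array and the fresh create is rejected with -17 ("File exists"). A condensed sketch of that flow, assuming an SPDK target already listening on /var/tmp/spdk-raid.sock; every RPC name and flag below is taken verbatim from this log, only the sequencing is abbreviated:

    # two 32 MiB, 512-byte-block malloc bdevs with passthru bdevs on top
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    # -s stamps a raid superblock through pt1/pt2 onto the malloc bdevs
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
    # the superblock survives, so creating directly over the malloc bdevs
    # must fail with -17 (File exists) -- the NOT helper asserts exactly that
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 \
        || echo 'create failed as expected'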
06:08:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:23.231 06:08:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:23.231 06:08:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:23.231 06:08:53 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.231 06:08:53 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:23.490 06:08:53 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:23.490 06:08:53 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:23.490 06:08:53 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:23.490 [2024-06-11 06:08:54.104449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:23.490 [2024-06-11 06:08:54.104769] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.490 [2024-06-11 06:08:54.104863] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:16:23.490 [2024-06-11 06:08:54.104964] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.491 [2024-06-11 06:08:54.107678] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.491 [2024-06-11 06:08:54.107844] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:23.491 [2024-06-11 06:08:54.108059] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:23.491 [2024-06-11 06:08:54.108223] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:23.491 pt1 00:16:23.491 06:08:54 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:23.491 06:08:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:23.491 06:08:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:23.491 06:08:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:23.491 06:08:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:23.491 06:08:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:23.491 06:08:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:23.491 06:08:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:23.491 06:08:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:23.491 06:08:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:23.491 06:08:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.491 06:08:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.750 06:08:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:23.750 "name": "raid_bdev1", 00:16:23.750 "uuid": "fc3ebaed-a7f6-49bb-ba5d-19fd6edbe139", 00:16:23.750 "strip_size_kb": 0, 00:16:23.750 "state": "configuring", 00:16:23.750 "raid_level": "raid1", 00:16:23.750 "superblock": true, 00:16:23.750 "num_base_bdevs": 2, 00:16:23.750 "num_base_bdevs_discovered": 1, 00:16:23.750 "num_base_bdevs_operational": 2, 00:16:23.750 "base_bdevs_list": [ 00:16:23.750 { 00:16:23.750 "name": "pt1", 00:16:23.750 "uuid": "60054faa-b865-52c2-96fc-236db3cc8ebc", 00:16:23.750 "is_configured": true, 00:16:23.750 "data_offset": 2048, 00:16:23.750 "data_size": 63488 00:16:23.750 }, 00:16:23.750 { 00:16:23.750 "name": null, 00:16:23.750 "uuid": 
"324e554a-ae22-514d-adad-686e3c58a30a", 00:16:23.750 "is_configured": false, 00:16:23.750 "data_offset": 2048, 00:16:23.750 "data_size": 63488 00:16:23.750 } 00:16:23.750 ] 00:16:23.750 }' 00:16:23.750 06:08:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:23.750 06:08:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.319 06:08:54 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:16:24.319 06:08:54 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:24.319 06:08:54 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:24.319 06:08:54 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:24.578 [2024-06-11 06:08:55.136660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:24.578 [2024-06-11 06:08:55.137005] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.578 [2024-06-11 06:08:55.137084] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:24.578 [2024-06-11 06:08:55.137196] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.578 [2024-06-11 06:08:55.137748] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.578 [2024-06-11 06:08:55.137908] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:24.578 [2024-06-11 06:08:55.138113] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:24.578 [2024-06-11 06:08:55.138211] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:24.578 [2024-06-11 06:08:55.138383] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:16:24.578 [2024-06-11 06:08:55.138467] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:24.578 [2024-06-11 06:08:55.138620] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:16:24.578 [2024-06-11 06:08:55.139115] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:16:24.578 [2024-06-11 06:08:55.139225] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:16:24.578 [2024-06-11 06:08:55.139434] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.578 pt2 00:16:24.578 06:08:55 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:24.578 06:08:55 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:24.578 06:08:55 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:24.578 06:08:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:24.578 06:08:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:24.578 06:08:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:24.578 06:08:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:24.578 06:08:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:24.578 06:08:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:24.578 06:08:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:24.578 06:08:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:24.578 06:08:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:24.579 06:08:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:16:24.579 06:08:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.838 06:08:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:24.838 "name": "raid_bdev1", 00:16:24.838 "uuid": "fc3ebaed-a7f6-49bb-ba5d-19fd6edbe139", 00:16:24.838 "strip_size_kb": 0, 00:16:24.838 "state": "online", 00:16:24.838 "raid_level": "raid1", 00:16:24.838 "superblock": true, 00:16:24.838 "num_base_bdevs": 2, 00:16:24.838 "num_base_bdevs_discovered": 2, 00:16:24.838 "num_base_bdevs_operational": 2, 00:16:24.838 "base_bdevs_list": [ 00:16:24.838 { 00:16:24.838 "name": "pt1", 00:16:24.838 "uuid": "60054faa-b865-52c2-96fc-236db3cc8ebc", 00:16:24.838 "is_configured": true, 00:16:24.838 "data_offset": 2048, 00:16:24.838 "data_size": 63488 00:16:24.838 }, 00:16:24.838 { 00:16:24.838 "name": "pt2", 00:16:24.838 "uuid": "324e554a-ae22-514d-adad-686e3c58a30a", 00:16:24.838 "is_configured": true, 00:16:24.838 "data_offset": 2048, 00:16:24.838 "data_size": 63488 00:16:24.838 } 00:16:24.838 ] 00:16:24.838 }' 00:16:24.838 06:08:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:24.838 06:08:55 -- common/autotest_common.sh@10 -- # set +x 00:16:25.406 06:08:55 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:25.406 06:08:55 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:25.666 [2024-06-11 06:08:56.081016] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.666 06:08:56 -- bdev/bdev_raid.sh@430 -- # '[' fc3ebaed-a7f6-49bb-ba5d-19fd6edbe139 '!=' fc3ebaed-a7f6-49bb-ba5d-19fd6edbe139 ']' 00:16:25.666 06:08:56 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:16:25.666 06:08:56 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:25.666 06:08:56 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:25.666 06:08:56 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:25.925 [2024-06-11 06:08:56.336919] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:25.925 06:08:56 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:25.925 06:08:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:25.925 06:08:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:25.925 06:08:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:25.925 06:08:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:25.925 06:08:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:25.925 06:08:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:25.925 06:08:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:25.925 06:08:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:25.925 06:08:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:25.925 06:08:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.925 06:08:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.184 06:08:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:26.184 "name": "raid_bdev1", 00:16:26.184 "uuid": "fc3ebaed-a7f6-49bb-ba5d-19fd6edbe139", 00:16:26.184 "strip_size_kb": 0, 00:16:26.184 "state": "online", 00:16:26.184 "raid_level": "raid1", 00:16:26.184 "superblock": true, 00:16:26.184 "num_base_bdevs": 2, 00:16:26.184 "num_base_bdevs_discovered": 1, 00:16:26.184 
"num_base_bdevs_operational": 1, 00:16:26.184 "base_bdevs_list": [ 00:16:26.184 { 00:16:26.184 "name": null, 00:16:26.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.184 "is_configured": false, 00:16:26.184 "data_offset": 2048, 00:16:26.184 "data_size": 63488 00:16:26.184 }, 00:16:26.184 { 00:16:26.184 "name": "pt2", 00:16:26.184 "uuid": "324e554a-ae22-514d-adad-686e3c58a30a", 00:16:26.184 "is_configured": true, 00:16:26.184 "data_offset": 2048, 00:16:26.184 "data_size": 63488 00:16:26.184 } 00:16:26.184 ] 00:16:26.184 }' 00:16:26.184 06:08:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:26.184 06:08:56 -- common/autotest_common.sh@10 -- # set +x 00:16:26.753 06:08:57 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:27.012 [2024-06-11 06:08:57.421065] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:27.012 [2024-06-11 06:08:57.421290] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:27.012 [2024-06-11 06:08:57.421513] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.012 [2024-06-11 06:08:57.421645] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.012 [2024-06-11 06:08:57.421723] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:16:27.012 06:08:57 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:16:27.012 06:08:57 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.272 06:08:57 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:16:27.272 06:08:57 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:16:27.272 06:08:57 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:16:27.272 06:08:57 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:27.272 06:08:57 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:27.272 06:08:57 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:16:27.272 06:08:57 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:27.272 06:08:57 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:16:27.272 06:08:57 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:16:27.272 06:08:57 -- bdev/bdev_raid.sh@462 -- # i=1 00:16:27.272 06:08:57 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:27.531 [2024-06-11 06:08:58.022512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:27.531 [2024-06-11 06:08:58.023374] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.531 [2024-06-11 06:08:58.023732] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:27.531 [2024-06-11 06:08:58.024059] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.531 [2024-06-11 06:08:58.028097] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.531 [2024-06-11 06:08:58.028458] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:27.531 [2024-06-11 06:08:58.028941] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:27.532 [2024-06-11 
06:08:58.029160] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:27.532 [2024-06-11 06:08:58.029493] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:16:27.532 [2024-06-11 06:08:58.029628] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:27.532 [2024-06-11 06:08:58.029861] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:27.532 pt2 00:16:27.532 [2024-06-11 06:08:58.030449] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:16:27.532 [2024-06-11 06:08:58.030598] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:16:27.532 [2024-06-11 06:08:58.030883] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.532 06:08:58 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:27.532 06:08:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:27.532 06:08:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:27.532 06:08:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:27.532 06:08:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:27.532 06:08:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:27.532 06:08:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:27.532 06:08:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:27.532 06:08:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:27.532 06:08:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:27.532 06:08:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.532 06:08:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.790 06:08:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:27.790 "name": "raid_bdev1", 00:16:27.790 "uuid": "fc3ebaed-a7f6-49bb-ba5d-19fd6edbe139", 00:16:27.790 "strip_size_kb": 0, 00:16:27.790 "state": "online", 00:16:27.790 "raid_level": "raid1", 00:16:27.790 "superblock": true, 00:16:27.790 "num_base_bdevs": 2, 00:16:27.790 "num_base_bdevs_discovered": 1, 00:16:27.790 "num_base_bdevs_operational": 1, 00:16:27.790 "base_bdevs_list": [ 00:16:27.790 { 00:16:27.790 "name": null, 00:16:27.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.790 "is_configured": false, 00:16:27.790 "data_offset": 2048, 00:16:27.790 "data_size": 63488 00:16:27.790 }, 00:16:27.790 { 00:16:27.790 "name": "pt2", 00:16:27.790 "uuid": "324e554a-ae22-514d-adad-686e3c58a30a", 00:16:27.790 "is_configured": true, 00:16:27.790 "data_offset": 2048, 00:16:27.790 "data_size": 63488 00:16:27.790 } 00:16:27.790 ] 00:16:27.790 }' 00:16:27.790 06:08:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:27.790 06:08:58 -- common/autotest_common.sh@10 -- # set +x 00:16:28.359 06:08:58 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:16:28.359 06:08:58 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:28.359 06:08:58 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:16:28.618 [2024-06-11 06:08:59.085328] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:28.618 06:08:59 -- bdev/bdev_raid.sh@506 -- # '[' fc3ebaed-a7f6-49bb-ba5d-19fd6edbe139 '!=' fc3ebaed-a7f6-49bb-ba5d-19fd6edbe139 ']' 00:16:28.618 
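NOTE: the string comparison above is the payoff of the superblock test: after pt1 was removed and the array torn down, re-registering pt2 alone let the examine path reassemble raid_bdev1 from the on-disk superblock, degraded (1 of 2 base bdevs discovered) but online, and the '[' ... '!=' ... ']' check confirms it kept the original array UUID. The two probes the helpers run for this, exactly as they appear in the log:

    # fields verify_raid_bdev_state asserts on: state, raid_level, base bdev counts
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1")'
    # UUID of the reassembled bdev, compared against the one recorded at create time
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 \
        | jq -r '.[] | .uuid'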
06:08:59 -- bdev/bdev_raid.sh@511 -- # killprocess 115224 00:16:28.618 06:08:59 -- common/autotest_common.sh@926 -- # '[' -z 115224 ']' 00:16:28.618 06:08:59 -- common/autotest_common.sh@930 -- # kill -0 115224 00:16:28.618 06:08:59 -- common/autotest_common.sh@931 -- # uname 00:16:28.618 06:08:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:28.618 06:08:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115224 00:16:28.618 06:08:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:28.618 06:08:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:28.618 06:08:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115224' 00:16:28.618 killing process with pid 115224 00:16:28.618 06:08:59 -- common/autotest_common.sh@945 -- # kill 115224 00:16:28.618 06:08:59 -- common/autotest_common.sh@950 -- # wait 115224 00:16:28.618 [2024-06-11 06:08:59.131616] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:28.618 [2024-06-11 06:08:59.131699] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:28.618 [2024-06-11 06:08:59.131755] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:28.618 [2024-06-11 06:08:59.131764] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:16:28.876 [2024-06-11 06:08:59.334398] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:30.254 ************************************ 00:16:30.254 END TEST raid_superblock_test 00:16:30.254 ************************************ 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:30.254 00:16:30.254 real 0m11.319s 00:16:30.254 user 0m18.788s 00:16:30.254 sys 0m2.022s 00:16:30.254 06:09:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:30.254 06:09:00 -- common/autotest_common.sh@10 -- # set +x 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:16:30.254 06:09:00 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:30.254 06:09:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:30.254 06:09:00 -- common/autotest_common.sh@10 -- # set +x 00:16:30.254 ************************************ 00:16:30.254 START TEST raid_state_function_test 00:16:30.254 ************************************ 00:16:30.254 06:09:00 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 false 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@226 -- # raid_pid=115576 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 115576' 00:16:30.254 Process raid pid: 115576 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:30.254 06:09:00 -- bdev/bdev_raid.sh@228 -- # waitforlisten 115576 /var/tmp/spdk-raid.sock 00:16:30.254 06:09:00 -- common/autotest_common.sh@819 -- # '[' -z 115576 ']' 00:16:30.254 06:09:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:30.254 06:09:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:30.254 06:09:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:30.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:30.254 06:09:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:30.254 06:09:00 -- common/autotest_common.sh@10 -- # set +x 00:16:30.254 [2024-06-11 06:09:00.880608] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:16:30.254 [2024-06-11 06:09:00.881147] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.513 [2024-06-11 06:09:01.056369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.772 [2024-06-11 06:09:01.298347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.031 [2024-06-11 06:09:01.546820] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:31.289 06:09:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:31.289 06:09:01 -- common/autotest_common.sh@852 -- # return 0 00:16:31.289 06:09:01 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:31.547 [2024-06-11 06:09:01.993922] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:31.547 [2024-06-11 06:09:01.994251] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:31.547 [2024-06-11 06:09:01.994340] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:31.547 [2024-06-11 06:09:01.994396] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:31.547 [2024-06-11 06:09:01.994422] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:31.547 [2024-06-11 06:09:01.994528] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:31.547 06:09:02 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:31.547 06:09:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:31.547 06:09:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:31.547 06:09:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:31.547 06:09:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:31.547 06:09:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:31.547 06:09:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:31.547 06:09:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:31.547 06:09:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:31.547 06:09:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:31.547 06:09:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.547 06:09:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.806 06:09:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:31.806 "name": "Existed_Raid", 00:16:31.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.806 "strip_size_kb": 64, 00:16:31.806 "state": "configuring", 00:16:31.806 "raid_level": "raid0", 00:16:31.806 "superblock": false, 00:16:31.806 "num_base_bdevs": 3, 00:16:31.806 "num_base_bdevs_discovered": 0, 00:16:31.806 "num_base_bdevs_operational": 3, 00:16:31.806 "base_bdevs_list": [ 00:16:31.806 { 00:16:31.806 "name": "BaseBdev1", 00:16:31.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.806 "is_configured": false, 00:16:31.806 "data_offset": 0, 00:16:31.806 "data_size": 0 00:16:31.806 }, 00:16:31.806 { 00:16:31.806 "name": "BaseBdev2", 00:16:31.806 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:31.806 "is_configured": false, 00:16:31.806 "data_offset": 0, 00:16:31.806 "data_size": 0 00:16:31.806 }, 00:16:31.806 { 00:16:31.806 "name": "BaseBdev3", 00:16:31.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.806 "is_configured": false, 00:16:31.806 "data_offset": 0, 00:16:31.806 "data_size": 0 00:16:31.806 } 00:16:31.806 ] 00:16:31.806 }' 00:16:31.806 06:09:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:31.806 06:09:02 -- common/autotest_common.sh@10 -- # set +x 00:16:32.373 06:09:02 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:32.373 [2024-06-11 06:09:02.925989] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:32.373 [2024-06-11 06:09:02.926238] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:16:32.373 06:09:02 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:32.632 [2024-06-11 06:09:03.098075] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:32.632 [2024-06-11 06:09:03.098324] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:32.632 [2024-06-11 06:09:03.098405] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:32.632 [2024-06-11 06:09:03.098467] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:32.632 [2024-06-11 06:09:03.098494] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:32.632 [2024-06-11 06:09:03.098545] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:32.632 06:09:03 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:32.890 [2024-06-11 06:09:03.314012] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:32.890 BaseBdev1 00:16:32.890 06:09:03 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:32.890 06:09:03 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:32.890 06:09:03 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:32.890 06:09:03 -- common/autotest_common.sh@889 -- # local i 00:16:32.890 06:09:03 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:32.890 06:09:03 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:32.890 06:09:03 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:33.148 06:09:03 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:33.148 [ 00:16:33.148 { 00:16:33.148 "name": "BaseBdev1", 00:16:33.148 "aliases": [ 00:16:33.148 "af14c985-caa5-491a-8168-5a83035e1e1c" 00:16:33.148 ], 00:16:33.148 "product_name": "Malloc disk", 00:16:33.148 "block_size": 512, 00:16:33.148 "num_blocks": 65536, 00:16:33.148 "uuid": "af14c985-caa5-491a-8168-5a83035e1e1c", 00:16:33.148 "assigned_rate_limits": { 00:16:33.148 "rw_ios_per_sec": 0, 00:16:33.148 "rw_mbytes_per_sec": 0, 00:16:33.148 "r_mbytes_per_sec": 0, 00:16:33.148 "w_mbytes_per_sec": 0 
00:16:33.148 }, 00:16:33.148 "claimed": true, 00:16:33.148 "claim_type": "exclusive_write", 00:16:33.148 "zoned": false, 00:16:33.148 "supported_io_types": { 00:16:33.148 "read": true, 00:16:33.148 "write": true, 00:16:33.148 "unmap": true, 00:16:33.148 "write_zeroes": true, 00:16:33.148 "flush": true, 00:16:33.148 "reset": true, 00:16:33.148 "compare": false, 00:16:33.148 "compare_and_write": false, 00:16:33.148 "abort": true, 00:16:33.148 "nvme_admin": false, 00:16:33.148 "nvme_io": false 00:16:33.148 }, 00:16:33.148 "memory_domains": [ 00:16:33.148 { 00:16:33.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.148 "dma_device_type": 2 00:16:33.148 } 00:16:33.148 ], 00:16:33.148 "driver_specific": {} 00:16:33.148 } 00:16:33.148 ] 00:16:33.406 06:09:03 -- common/autotest_common.sh@895 -- # return 0 00:16:33.406 06:09:03 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:33.406 06:09:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:33.406 06:09:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:33.406 06:09:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:33.407 06:09:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:33.407 06:09:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:33.407 06:09:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:33.407 06:09:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:33.407 06:09:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:33.407 06:09:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:33.407 06:09:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.407 06:09:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.407 06:09:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:33.407 "name": "Existed_Raid", 00:16:33.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.407 "strip_size_kb": 64, 00:16:33.407 "state": "configuring", 00:16:33.407 "raid_level": "raid0", 00:16:33.407 "superblock": false, 00:16:33.407 "num_base_bdevs": 3, 00:16:33.407 "num_base_bdevs_discovered": 1, 00:16:33.407 "num_base_bdevs_operational": 3, 00:16:33.407 "base_bdevs_list": [ 00:16:33.407 { 00:16:33.407 "name": "BaseBdev1", 00:16:33.407 "uuid": "af14c985-caa5-491a-8168-5a83035e1e1c", 00:16:33.407 "is_configured": true, 00:16:33.407 "data_offset": 0, 00:16:33.407 "data_size": 65536 00:16:33.407 }, 00:16:33.407 { 00:16:33.407 "name": "BaseBdev2", 00:16:33.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.407 "is_configured": false, 00:16:33.407 "data_offset": 0, 00:16:33.407 "data_size": 0 00:16:33.407 }, 00:16:33.407 { 00:16:33.407 "name": "BaseBdev3", 00:16:33.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.407 "is_configured": false, 00:16:33.407 "data_offset": 0, 00:16:33.407 "data_size": 0 00:16:33.407 } 00:16:33.407 ] 00:16:33.407 }' 00:16:33.407 06:09:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:33.407 06:09:03 -- common/autotest_common.sh@10 -- # set +x 00:16:33.973 06:09:04 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:34.231 [2024-06-11 06:09:04.742291] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:34.231 [2024-06-11 06:09:04.742518] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006680 name Existed_Raid, state configuring 00:16:34.231 06:09:04 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:34.231 06:09:04 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:34.489 [2024-06-11 06:09:05.010428] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:34.489 [2024-06-11 06:09:05.012728] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:34.489 [2024-06-11 06:09:05.012912] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:34.489 [2024-06-11 06:09:05.012994] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:34.489 [2024-06-11 06:09:05.013052] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:34.489 06:09:05 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:34.489 06:09:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:34.489 06:09:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:34.490 06:09:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:34.490 06:09:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:34.490 06:09:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:34.490 06:09:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:34.490 06:09:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:34.490 06:09:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:34.490 06:09:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:34.490 06:09:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:34.490 06:09:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:34.490 06:09:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.490 06:09:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.748 06:09:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:34.748 "name": "Existed_Raid", 00:16:34.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.748 "strip_size_kb": 64, 00:16:34.748 "state": "configuring", 00:16:34.748 "raid_level": "raid0", 00:16:34.748 "superblock": false, 00:16:34.748 "num_base_bdevs": 3, 00:16:34.748 "num_base_bdevs_discovered": 1, 00:16:34.748 "num_base_bdevs_operational": 3, 00:16:34.748 "base_bdevs_list": [ 00:16:34.748 { 00:16:34.748 "name": "BaseBdev1", 00:16:34.748 "uuid": "af14c985-caa5-491a-8168-5a83035e1e1c", 00:16:34.748 "is_configured": true, 00:16:34.748 "data_offset": 0, 00:16:34.748 "data_size": 65536 00:16:34.748 }, 00:16:34.748 { 00:16:34.748 "name": "BaseBdev2", 00:16:34.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.748 "is_configured": false, 00:16:34.748 "data_offset": 0, 00:16:34.748 "data_size": 0 00:16:34.748 }, 00:16:34.748 { 00:16:34.748 "name": "BaseBdev3", 00:16:34.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.748 "is_configured": false, 00:16:34.748 "data_offset": 0, 00:16:34.748 "data_size": 0 00:16:34.748 } 00:16:34.748 ] 00:16:34.748 }' 00:16:34.748 06:09:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:34.748 06:09:05 -- common/autotest_common.sh@10 -- # set +x 00:16:35.315 06:09:05 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:35.574 [2024-06-11 06:09:06.002295] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:35.574 BaseBdev2 00:16:35.574 06:09:06 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:35.574 06:09:06 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:35.574 06:09:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:35.574 06:09:06 -- common/autotest_common.sh@889 -- # local i 00:16:35.574 06:09:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:35.574 06:09:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:35.574 06:09:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:35.832 06:09:06 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:36.090 [ 00:16:36.091 { 00:16:36.091 "name": "BaseBdev2", 00:16:36.091 "aliases": [ 00:16:36.091 "1e6e6cc9-a265-404e-9f6f-b191fa7b7c10" 00:16:36.091 ], 00:16:36.091 "product_name": "Malloc disk", 00:16:36.091 "block_size": 512, 00:16:36.091 "num_blocks": 65536, 00:16:36.091 "uuid": "1e6e6cc9-a265-404e-9f6f-b191fa7b7c10", 00:16:36.091 "assigned_rate_limits": { 00:16:36.091 "rw_ios_per_sec": 0, 00:16:36.091 "rw_mbytes_per_sec": 0, 00:16:36.091 "r_mbytes_per_sec": 0, 00:16:36.091 "w_mbytes_per_sec": 0 00:16:36.091 }, 00:16:36.091 "claimed": true, 00:16:36.091 "claim_type": "exclusive_write", 00:16:36.091 "zoned": false, 00:16:36.091 "supported_io_types": { 00:16:36.091 "read": true, 00:16:36.091 "write": true, 00:16:36.091 "unmap": true, 00:16:36.091 "write_zeroes": true, 00:16:36.091 "flush": true, 00:16:36.091 "reset": true, 00:16:36.091 "compare": false, 00:16:36.091 "compare_and_write": false, 00:16:36.091 "abort": true, 00:16:36.091 "nvme_admin": false, 00:16:36.091 "nvme_io": false 00:16:36.091 }, 00:16:36.091 "memory_domains": [ 00:16:36.091 { 00:16:36.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.091 "dma_device_type": 2 00:16:36.091 } 00:16:36.091 ], 00:16:36.091 "driver_specific": {} 00:16:36.091 } 00:16:36.091 ] 00:16:36.091 06:09:06 -- common/autotest_common.sh@895 -- # return 0 00:16:36.091 06:09:06 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:36.091 06:09:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:36.091 06:09:06 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:36.091 06:09:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:36.091 06:09:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:36.091 06:09:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:36.091 06:09:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:36.091 06:09:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:36.091 06:09:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:36.091 06:09:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:36.091 06:09:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:36.091 06:09:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:36.091 06:09:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.091 06:09:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:16:36.091 06:09:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:36.091 "name": "Existed_Raid", 00:16:36.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.091 "strip_size_kb": 64, 00:16:36.091 "state": "configuring", 00:16:36.091 "raid_level": "raid0", 00:16:36.091 "superblock": false, 00:16:36.091 "num_base_bdevs": 3, 00:16:36.091 "num_base_bdevs_discovered": 2, 00:16:36.091 "num_base_bdevs_operational": 3, 00:16:36.091 "base_bdevs_list": [ 00:16:36.091 { 00:16:36.091 "name": "BaseBdev1", 00:16:36.091 "uuid": "af14c985-caa5-491a-8168-5a83035e1e1c", 00:16:36.091 "is_configured": true, 00:16:36.091 "data_offset": 0, 00:16:36.091 "data_size": 65536 00:16:36.091 }, 00:16:36.091 { 00:16:36.091 "name": "BaseBdev2", 00:16:36.091 "uuid": "1e6e6cc9-a265-404e-9f6f-b191fa7b7c10", 00:16:36.091 "is_configured": true, 00:16:36.091 "data_offset": 0, 00:16:36.091 "data_size": 65536 00:16:36.091 }, 00:16:36.091 { 00:16:36.091 "name": "BaseBdev3", 00:16:36.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.091 "is_configured": false, 00:16:36.091 "data_offset": 0, 00:16:36.091 "data_size": 0 00:16:36.091 } 00:16:36.091 ] 00:16:36.091 }' 00:16:36.091 06:09:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:36.091 06:09:06 -- common/autotest_common.sh@10 -- # set +x 00:16:36.658 06:09:07 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:36.916 [2024-06-11 06:09:07.453088] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:36.916 [2024-06-11 06:09:07.453409] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:16:36.917 [2024-06-11 06:09:07.453453] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:36.917 [2024-06-11 06:09:07.453679] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:36.917 [2024-06-11 06:09:07.454201] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:16:36.917 [2024-06-11 06:09:07.454311] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:16:36.917 [2024-06-11 06:09:07.454648] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.917 BaseBdev3 00:16:36.917 06:09:07 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:36.917 06:09:07 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:36.917 06:09:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:36.917 06:09:07 -- common/autotest_common.sh@889 -- # local i 00:16:36.917 06:09:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:36.917 06:09:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:36.917 06:09:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:37.175 06:09:07 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:37.434 [ 00:16:37.434 { 00:16:37.434 "name": "BaseBdev3", 00:16:37.434 "aliases": [ 00:16:37.434 "3b55edb2-bb1f-4028-9d63-32603a96bfa3" 00:16:37.434 ], 00:16:37.434 "product_name": "Malloc disk", 00:16:37.434 "block_size": 512, 00:16:37.434 "num_blocks": 65536, 00:16:37.434 "uuid": "3b55edb2-bb1f-4028-9d63-32603a96bfa3", 00:16:37.434 "assigned_rate_limits": { 00:16:37.434 
"rw_ios_per_sec": 0, 00:16:37.434 "rw_mbytes_per_sec": 0, 00:16:37.434 "r_mbytes_per_sec": 0, 00:16:37.434 "w_mbytes_per_sec": 0 00:16:37.434 }, 00:16:37.434 "claimed": true, 00:16:37.434 "claim_type": "exclusive_write", 00:16:37.434 "zoned": false, 00:16:37.434 "supported_io_types": { 00:16:37.434 "read": true, 00:16:37.434 "write": true, 00:16:37.434 "unmap": true, 00:16:37.434 "write_zeroes": true, 00:16:37.434 "flush": true, 00:16:37.434 "reset": true, 00:16:37.434 "compare": false, 00:16:37.434 "compare_and_write": false, 00:16:37.434 "abort": true, 00:16:37.434 "nvme_admin": false, 00:16:37.434 "nvme_io": false 00:16:37.434 }, 00:16:37.434 "memory_domains": [ 00:16:37.434 { 00:16:37.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.434 "dma_device_type": 2 00:16:37.434 } 00:16:37.434 ], 00:16:37.434 "driver_specific": {} 00:16:37.434 } 00:16:37.434 ] 00:16:37.434 06:09:07 -- common/autotest_common.sh@895 -- # return 0 00:16:37.434 06:09:07 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:37.434 06:09:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:37.434 06:09:07 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:37.434 06:09:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:37.434 06:09:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:37.434 06:09:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:37.434 06:09:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:37.434 06:09:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:37.434 06:09:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:37.434 06:09:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:37.434 06:09:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:37.434 06:09:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:37.434 06:09:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.434 06:09:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.434 06:09:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:37.434 "name": "Existed_Raid", 00:16:37.434 "uuid": "b99681fc-d47b-4587-98e0-72c1fc0d1bbd", 00:16:37.434 "strip_size_kb": 64, 00:16:37.434 "state": "online", 00:16:37.434 "raid_level": "raid0", 00:16:37.434 "superblock": false, 00:16:37.434 "num_base_bdevs": 3, 00:16:37.434 "num_base_bdevs_discovered": 3, 00:16:37.434 "num_base_bdevs_operational": 3, 00:16:37.435 "base_bdevs_list": [ 00:16:37.435 { 00:16:37.435 "name": "BaseBdev1", 00:16:37.435 "uuid": "af14c985-caa5-491a-8168-5a83035e1e1c", 00:16:37.435 "is_configured": true, 00:16:37.435 "data_offset": 0, 00:16:37.435 "data_size": 65536 00:16:37.435 }, 00:16:37.435 { 00:16:37.435 "name": "BaseBdev2", 00:16:37.435 "uuid": "1e6e6cc9-a265-404e-9f6f-b191fa7b7c10", 00:16:37.435 "is_configured": true, 00:16:37.435 "data_offset": 0, 00:16:37.435 "data_size": 65536 00:16:37.435 }, 00:16:37.435 { 00:16:37.435 "name": "BaseBdev3", 00:16:37.435 "uuid": "3b55edb2-bb1f-4028-9d63-32603a96bfa3", 00:16:37.435 "is_configured": true, 00:16:37.435 "data_offset": 0, 00:16:37.435 "data_size": 65536 00:16:37.435 } 00:16:37.435 ] 00:16:37.435 }' 00:16:37.435 06:09:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:37.435 06:09:08 -- common/autotest_common.sh@10 -- # set +x 00:16:38.370 06:09:08 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:16:38.370 [2024-06-11 06:09:08.865450] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:38.370 [2024-06-11 06:09:08.865676] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:38.370 [2024-06-11 06:09:08.865914] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:38.370 06:09:08 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:38.370 06:09:08 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:38.370 06:09:08 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:38.370 06:09:08 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:38.370 06:09:08 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:38.370 06:09:08 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:16:38.370 06:09:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:38.370 06:09:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:38.370 06:09:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:38.370 06:09:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:38.370 06:09:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:38.370 06:09:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:38.370 06:09:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:38.370 06:09:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:38.370 06:09:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:38.370 06:09:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.370 06:09:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.628 06:09:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:38.628 "name": "Existed_Raid", 00:16:38.628 "uuid": "b99681fc-d47b-4587-98e0-72c1fc0d1bbd", 00:16:38.628 "strip_size_kb": 64, 00:16:38.628 "state": "offline", 00:16:38.628 "raid_level": "raid0", 00:16:38.628 "superblock": false, 00:16:38.628 "num_base_bdevs": 3, 00:16:38.628 "num_base_bdevs_discovered": 2, 00:16:38.628 "num_base_bdevs_operational": 2, 00:16:38.628 "base_bdevs_list": [ 00:16:38.628 { 00:16:38.628 "name": null, 00:16:38.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.628 "is_configured": false, 00:16:38.628 "data_offset": 0, 00:16:38.629 "data_size": 65536 00:16:38.629 }, 00:16:38.629 { 00:16:38.629 "name": "BaseBdev2", 00:16:38.629 "uuid": "1e6e6cc9-a265-404e-9f6f-b191fa7b7c10", 00:16:38.629 "is_configured": true, 00:16:38.629 "data_offset": 0, 00:16:38.629 "data_size": 65536 00:16:38.629 }, 00:16:38.629 { 00:16:38.629 "name": "BaseBdev3", 00:16:38.629 "uuid": "3b55edb2-bb1f-4028-9d63-32603a96bfa3", 00:16:38.629 "is_configured": true, 00:16:38.629 "data_offset": 0, 00:16:38.629 "data_size": 65536 00:16:38.629 } 00:16:38.629 ] 00:16:38.629 }' 00:16:38.629 06:09:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:38.629 06:09:09 -- common/autotest_common.sh@10 -- # set +x 00:16:39.197 06:09:09 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:39.197 06:09:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:39.197 06:09:09 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.197 06:09:09 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:39.456 06:09:09 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:39.456 06:09:09 -- bdev/bdev_raid.sh@275 -- 
# '[' Existed_Raid '!=' Existed_Raid ']' 00:16:39.456 06:09:09 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:39.715 [2024-06-11 06:09:10.223154] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:39.715 06:09:10 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:39.715 06:09:10 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:39.715 06:09:10 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.715 06:09:10 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:39.973 06:09:10 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:39.973 06:09:10 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:39.973 06:09:10 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:40.233 [2024-06-11 06:09:10.708083] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:40.233 [2024-06-11 06:09:10.708327] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:16:40.233 06:09:10 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:40.233 06:09:10 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:40.233 06:09:10 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:40.233 06:09:10 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.492 06:09:11 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:40.492 06:09:11 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:40.492 06:09:11 -- bdev/bdev_raid.sh@287 -- # killprocess 115576 00:16:40.492 06:09:11 -- common/autotest_common.sh@926 -- # '[' -z 115576 ']' 00:16:40.492 06:09:11 -- common/autotest_common.sh@930 -- # kill -0 115576 00:16:40.492 06:09:11 -- common/autotest_common.sh@931 -- # uname 00:16:40.492 06:09:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:40.492 06:09:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115576 00:16:40.492 06:09:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:40.492 06:09:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:40.492 06:09:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115576' 00:16:40.492 killing process with pid 115576 00:16:40.492 06:09:11 -- common/autotest_common.sh@945 -- # kill 115576 00:16:40.492 [2024-06-11 06:09:11.049376] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:40.492 [2024-06-11 06:09:11.049630] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:40.492 06:09:11 -- common/autotest_common.sh@950 -- # wait 115576 00:16:41.871 ************************************ 00:16:41.871 END TEST raid_state_function_test 00:16:41.871 ************************************ 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:41.871 00:16:41.871 real 0m11.639s 00:16:41.871 user 0m19.135s 00:16:41.871 sys 0m2.092s 00:16:41.871 06:09:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:41.871 06:09:12 -- common/autotest_common.sh@10 -- # set +x 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:16:41.871 06:09:12 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:41.871 06:09:12 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:16:41.871 06:09:12 -- common/autotest_common.sh@10 -- # set +x 00:16:41.871 ************************************ 00:16:41.871 START TEST raid_state_function_test_sb 00:16:41.871 ************************************ 00:16:41.871 06:09:12 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 true 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@226 -- # raid_pid=115946 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:41.871 Process raid pid: 115946 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 115946' 00:16:41.871 06:09:12 -- bdev/bdev_raid.sh@228 -- # waitforlisten 115946 /var/tmp/spdk-raid.sock 00:16:41.871 06:09:12 -- common/autotest_common.sh@819 -- # '[' -z 115946 ']' 00:16:41.871 06:09:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:41.871 06:09:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:41.871 06:09:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:41.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:41.871 06:09:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:41.871 06:09:12 -- common/autotest_common.sh@10 -- # set +x 00:16:42.130 [2024-06-11 06:09:12.574499] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
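Annotation: the trace above is the prologue of raid_state_function_test running with superblock=true — a bare bdev_svc app is launched, the test waits for its RPC socket, and everything afterwards is driven through rpc.py. A minimal sketch of the round-trips that follow, using only commands and values that appear verbatim in this trace (rpc and sock are shorthand for the full paths logged above):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Each RAID member is backed by a 32 MiB, 512-byte-block malloc bdev.
  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
      $rpc -s $sock bdev_malloc_create 32 512 -b $b
  done
  # raid0 with a 64 KiB strip; -s requests an on-disk superblock.
  $rpc -s $sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  # Every state assertion in this test reduces to a query like this one.
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'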
00:16:42.130 [2024-06-11 06:09:12.574852] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:42.130 [2024-06-11 06:09:12.736403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.388 [2024-06-11 06:09:12.978414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.647 [2024-06-11 06:09:13.221802] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:42.906 06:09:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:42.906 06:09:13 -- common/autotest_common.sh@852 -- # return 0 00:16:42.906 06:09:13 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:43.165 [2024-06-11 06:09:13.736854] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:43.165 [2024-06-11 06:09:13.737185] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:43.165 [2024-06-11 06:09:13.737276] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:43.165 [2024-06-11 06:09:13.737330] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:43.165 [2024-06-11 06:09:13.737358] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:43.165 [2024-06-11 06:09:13.737426] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:43.165 06:09:13 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:43.165 06:09:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:43.165 06:09:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:43.165 06:09:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:43.165 06:09:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:43.165 06:09:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:43.165 06:09:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:43.165 06:09:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:43.165 06:09:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:43.165 06:09:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:43.165 06:09:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.165 06:09:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.424 06:09:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:43.424 "name": "Existed_Raid", 00:16:43.424 "uuid": "040448a3-4600-4513-9aa0-71776f7d2850", 00:16:43.424 "strip_size_kb": 64, 00:16:43.424 "state": "configuring", 00:16:43.424 "raid_level": "raid0", 00:16:43.424 "superblock": true, 00:16:43.424 "num_base_bdevs": 3, 00:16:43.424 "num_base_bdevs_discovered": 0, 00:16:43.424 "num_base_bdevs_operational": 3, 00:16:43.424 "base_bdevs_list": [ 00:16:43.424 { 00:16:43.424 "name": "BaseBdev1", 00:16:43.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.424 "is_configured": false, 00:16:43.424 "data_offset": 0, 00:16:43.424 "data_size": 0 00:16:43.424 }, 00:16:43.424 { 00:16:43.424 "name": "BaseBdev2", 00:16:43.424 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:43.424 "is_configured": false, 00:16:43.424 "data_offset": 0, 00:16:43.424 "data_size": 0 00:16:43.424 }, 00:16:43.424 { 00:16:43.424 "name": "BaseBdev3", 00:16:43.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.424 "is_configured": false, 00:16:43.424 "data_offset": 0, 00:16:43.424 "data_size": 0 00:16:43.424 } 00:16:43.424 ] 00:16:43.424 }' 00:16:43.424 06:09:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:43.424 06:09:13 -- common/autotest_common.sh@10 -- # set +x 00:16:44.078 06:09:14 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:44.078 [2024-06-11 06:09:14.688875] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:44.078 [2024-06-11 06:09:14.689114] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:16:44.078 06:09:14 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:44.345 [2024-06-11 06:09:14.933022] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:44.346 [2024-06-11 06:09:14.933232] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:44.346 [2024-06-11 06:09:14.933315] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:44.346 [2024-06-11 06:09:14.933374] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:44.346 [2024-06-11 06:09:14.933401] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:44.346 [2024-06-11 06:09:14.933447] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:44.346 06:09:14 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:44.611 [2024-06-11 06:09:15.156526] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:44.611 BaseBdev1 00:16:44.611 06:09:15 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:44.611 06:09:15 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:44.611 06:09:15 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:44.611 06:09:15 -- common/autotest_common.sh@889 -- # local i 00:16:44.611 06:09:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:44.611 06:09:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:44.611 06:09:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:44.870 06:09:15 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:44.870 [ 00:16:44.870 { 00:16:44.870 "name": "BaseBdev1", 00:16:44.870 "aliases": [ 00:16:44.870 "1b5fa693-694a-4931-91b2-8ee8798ff57f" 00:16:44.870 ], 00:16:44.870 "product_name": "Malloc disk", 00:16:44.870 "block_size": 512, 00:16:44.870 "num_blocks": 65536, 00:16:44.870 "uuid": "1b5fa693-694a-4931-91b2-8ee8798ff57f", 00:16:44.870 "assigned_rate_limits": { 00:16:44.870 "rw_ios_per_sec": 0, 00:16:44.870 "rw_mbytes_per_sec": 0, 00:16:44.870 "r_mbytes_per_sec": 0, 00:16:44.870 
"w_mbytes_per_sec": 0 00:16:44.870 }, 00:16:44.870 "claimed": true, 00:16:44.870 "claim_type": "exclusive_write", 00:16:44.870 "zoned": false, 00:16:44.870 "supported_io_types": { 00:16:44.870 "read": true, 00:16:44.870 "write": true, 00:16:44.870 "unmap": true, 00:16:44.870 "write_zeroes": true, 00:16:44.870 "flush": true, 00:16:44.870 "reset": true, 00:16:44.870 "compare": false, 00:16:44.870 "compare_and_write": false, 00:16:44.870 "abort": true, 00:16:44.870 "nvme_admin": false, 00:16:44.870 "nvme_io": false 00:16:44.870 }, 00:16:44.870 "memory_domains": [ 00:16:44.870 { 00:16:44.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.870 "dma_device_type": 2 00:16:44.870 } 00:16:44.870 ], 00:16:44.870 "driver_specific": {} 00:16:44.870 } 00:16:44.870 ] 00:16:45.129 06:09:15 -- common/autotest_common.sh@895 -- # return 0 00:16:45.129 06:09:15 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:45.129 06:09:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:45.129 06:09:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:45.129 06:09:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:45.129 06:09:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:45.129 06:09:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:45.129 06:09:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:45.129 06:09:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:45.129 06:09:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:45.129 06:09:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:45.129 06:09:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.129 06:09:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.129 06:09:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:45.129 "name": "Existed_Raid", 00:16:45.129 "uuid": "0670fad0-feb7-4ff5-96f8-d357ca665d7d", 00:16:45.129 "strip_size_kb": 64, 00:16:45.129 "state": "configuring", 00:16:45.129 "raid_level": "raid0", 00:16:45.129 "superblock": true, 00:16:45.129 "num_base_bdevs": 3, 00:16:45.129 "num_base_bdevs_discovered": 1, 00:16:45.129 "num_base_bdevs_operational": 3, 00:16:45.129 "base_bdevs_list": [ 00:16:45.129 { 00:16:45.129 "name": "BaseBdev1", 00:16:45.129 "uuid": "1b5fa693-694a-4931-91b2-8ee8798ff57f", 00:16:45.129 "is_configured": true, 00:16:45.129 "data_offset": 2048, 00:16:45.129 "data_size": 63488 00:16:45.129 }, 00:16:45.129 { 00:16:45.129 "name": "BaseBdev2", 00:16:45.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.129 "is_configured": false, 00:16:45.129 "data_offset": 0, 00:16:45.129 "data_size": 0 00:16:45.129 }, 00:16:45.129 { 00:16:45.129 "name": "BaseBdev3", 00:16:45.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.129 "is_configured": false, 00:16:45.129 "data_offset": 0, 00:16:45.129 "data_size": 0 00:16:45.129 } 00:16:45.129 ] 00:16:45.129 }' 00:16:45.129 06:09:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:45.129 06:09:15 -- common/autotest_common.sh@10 -- # set +x 00:16:45.697 06:09:16 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:45.956 [2024-06-11 06:09:16.524796] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:45.956 [2024-06-11 06:09:16.525052] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:45.956 06:09:16 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:45.956 06:09:16 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:46.215 06:09:16 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:46.473 BaseBdev1 00:16:46.474 06:09:17 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:46.474 06:09:17 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:46.474 06:09:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:46.474 06:09:17 -- common/autotest_common.sh@889 -- # local i 00:16:46.474 06:09:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:46.474 06:09:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:46.474 06:09:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:46.733 06:09:17 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:46.992 [ 00:16:46.992 { 00:16:46.992 "name": "BaseBdev1", 00:16:46.992 "aliases": [ 00:16:46.992 "8ebadb96-8711-428f-bb25-ed9ab82ba12c" 00:16:46.992 ], 00:16:46.992 "product_name": "Malloc disk", 00:16:46.992 "block_size": 512, 00:16:46.992 "num_blocks": 65536, 00:16:46.992 "uuid": "8ebadb96-8711-428f-bb25-ed9ab82ba12c", 00:16:46.992 "assigned_rate_limits": { 00:16:46.992 "rw_ios_per_sec": 0, 00:16:46.992 "rw_mbytes_per_sec": 0, 00:16:46.992 "r_mbytes_per_sec": 0, 00:16:46.992 "w_mbytes_per_sec": 0 00:16:46.992 }, 00:16:46.992 "claimed": false, 00:16:46.992 "zoned": false, 00:16:46.992 "supported_io_types": { 00:16:46.992 "read": true, 00:16:46.992 "write": true, 00:16:46.992 "unmap": true, 00:16:46.992 "write_zeroes": true, 00:16:46.992 "flush": true, 00:16:46.992 "reset": true, 00:16:46.992 "compare": false, 00:16:46.992 "compare_and_write": false, 00:16:46.992 "abort": true, 00:16:46.992 "nvme_admin": false, 00:16:46.992 "nvme_io": false 00:16:46.992 }, 00:16:46.992 "memory_domains": [ 00:16:46.992 { 00:16:46.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:46.992 "dma_device_type": 2 00:16:46.992 } 00:16:46.992 ], 00:16:46.992 "driver_specific": {} 00:16:46.992 } 00:16:46.992 ] 00:16:46.992 06:09:17 -- common/autotest_common.sh@895 -- # return 0 00:16:46.992 06:09:17 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:47.251 [2024-06-11 06:09:17.650499] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:47.251 [2024-06-11 06:09:17.652929] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:47.251 [2024-06-11 06:09:17.653087] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:47.251 [2024-06-11 06:09:17.653174] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:47.251 [2024-06-11 06:09:17.653233] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:47.251 06:09:17 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:47.251 06:09:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:47.251 
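Annotation: the verify_raid_bdev_state helper entered below (bdev_raid.sh@117-129 in the trace) is a fetch-and-compare — it pulls the raid bdev's JSON over RPC, filters it with jq, and asserts individual fields against the expected values passed in. A condensed sketch of that logic, reconstructed from the fields the trace checks (illustrative only; the actual helper body lives in bdev_raid.sh):

  tmp=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  [[ $(jq -r '.state' <<< "$tmp") == configuring ]]              # expected_state
  [[ $(jq -r '.raid_level' <<< "$tmp") == raid0 ]]               # raid_level
  [[ $(jq -r '.strip_size_kb' <<< "$tmp") == 64 ]]               # strip_size
  [[ $(jq -r '.num_base_bdevs_operational' <<< "$tmp") == 3 ]]   # num_base_bdevs_operational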
06:09:17 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:47.251 06:09:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:47.251 06:09:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:47.251 06:09:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:47.251 06:09:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:47.251 06:09:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:47.251 06:09:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:47.251 06:09:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:47.251 06:09:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:47.251 06:09:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:47.251 06:09:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.251 06:09:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.251 06:09:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:47.251 "name": "Existed_Raid", 00:16:47.251 "uuid": "a3b8b3f3-0b2e-4b47-82f8-a36da92d1739", 00:16:47.251 "strip_size_kb": 64, 00:16:47.251 "state": "configuring", 00:16:47.251 "raid_level": "raid0", 00:16:47.251 "superblock": true, 00:16:47.251 "num_base_bdevs": 3, 00:16:47.251 "num_base_bdevs_discovered": 1, 00:16:47.251 "num_base_bdevs_operational": 3, 00:16:47.251 "base_bdevs_list": [ 00:16:47.251 { 00:16:47.251 "name": "BaseBdev1", 00:16:47.251 "uuid": "8ebadb96-8711-428f-bb25-ed9ab82ba12c", 00:16:47.251 "is_configured": true, 00:16:47.251 "data_offset": 2048, 00:16:47.251 "data_size": 63488 00:16:47.251 }, 00:16:47.251 { 00:16:47.251 "name": "BaseBdev2", 00:16:47.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.251 "is_configured": false, 00:16:47.251 "data_offset": 0, 00:16:47.251 "data_size": 0 00:16:47.251 }, 00:16:47.251 { 00:16:47.251 "name": "BaseBdev3", 00:16:47.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.251 "is_configured": false, 00:16:47.251 "data_offset": 0, 00:16:47.251 "data_size": 0 00:16:47.251 } 00:16:47.251 ] 00:16:47.251 }' 00:16:47.251 06:09:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:47.251 06:09:17 -- common/autotest_common.sh@10 -- # set +x 00:16:47.819 06:09:18 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:48.078 [2024-06-11 06:09:18.622738] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:48.078 BaseBdev2 00:16:48.078 06:09:18 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:48.078 06:09:18 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:48.078 06:09:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:48.078 06:09:18 -- common/autotest_common.sh@889 -- # local i 00:16:48.078 06:09:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:48.078 06:09:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:48.078 06:09:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:48.337 06:09:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:48.596 [ 00:16:48.596 { 00:16:48.596 "name": "BaseBdev2", 00:16:48.596 "aliases": [ 00:16:48.596 
"d57d9e5f-fb57-4519-b108-3882eccc1b48" 00:16:48.596 ], 00:16:48.596 "product_name": "Malloc disk", 00:16:48.596 "block_size": 512, 00:16:48.596 "num_blocks": 65536, 00:16:48.596 "uuid": "d57d9e5f-fb57-4519-b108-3882eccc1b48", 00:16:48.596 "assigned_rate_limits": { 00:16:48.596 "rw_ios_per_sec": 0, 00:16:48.596 "rw_mbytes_per_sec": 0, 00:16:48.596 "r_mbytes_per_sec": 0, 00:16:48.596 "w_mbytes_per_sec": 0 00:16:48.596 }, 00:16:48.596 "claimed": true, 00:16:48.596 "claim_type": "exclusive_write", 00:16:48.596 "zoned": false, 00:16:48.596 "supported_io_types": { 00:16:48.596 "read": true, 00:16:48.596 "write": true, 00:16:48.596 "unmap": true, 00:16:48.596 "write_zeroes": true, 00:16:48.596 "flush": true, 00:16:48.596 "reset": true, 00:16:48.596 "compare": false, 00:16:48.596 "compare_and_write": false, 00:16:48.596 "abort": true, 00:16:48.596 "nvme_admin": false, 00:16:48.596 "nvme_io": false 00:16:48.596 }, 00:16:48.596 "memory_domains": [ 00:16:48.596 { 00:16:48.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.596 "dma_device_type": 2 00:16:48.596 } 00:16:48.596 ], 00:16:48.596 "driver_specific": {} 00:16:48.596 } 00:16:48.596 ] 00:16:48.596 06:09:19 -- common/autotest_common.sh@895 -- # return 0 00:16:48.596 06:09:19 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:48.596 06:09:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:48.596 06:09:19 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:48.596 06:09:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:48.596 06:09:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:48.596 06:09:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:48.596 06:09:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:48.596 06:09:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:48.596 06:09:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:48.596 06:09:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:48.596 06:09:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:48.596 06:09:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:48.596 06:09:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.596 06:09:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.855 06:09:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:48.855 "name": "Existed_Raid", 00:16:48.855 "uuid": "a3b8b3f3-0b2e-4b47-82f8-a36da92d1739", 00:16:48.855 "strip_size_kb": 64, 00:16:48.855 "state": "configuring", 00:16:48.855 "raid_level": "raid0", 00:16:48.855 "superblock": true, 00:16:48.855 "num_base_bdevs": 3, 00:16:48.855 "num_base_bdevs_discovered": 2, 00:16:48.855 "num_base_bdevs_operational": 3, 00:16:48.855 "base_bdevs_list": [ 00:16:48.855 { 00:16:48.855 "name": "BaseBdev1", 00:16:48.855 "uuid": "8ebadb96-8711-428f-bb25-ed9ab82ba12c", 00:16:48.855 "is_configured": true, 00:16:48.855 "data_offset": 2048, 00:16:48.855 "data_size": 63488 00:16:48.855 }, 00:16:48.855 { 00:16:48.855 "name": "BaseBdev2", 00:16:48.855 "uuid": "d57d9e5f-fb57-4519-b108-3882eccc1b48", 00:16:48.855 "is_configured": true, 00:16:48.855 "data_offset": 2048, 00:16:48.855 "data_size": 63488 00:16:48.855 }, 00:16:48.855 { 00:16:48.855 "name": "BaseBdev3", 00:16:48.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.855 "is_configured": false, 00:16:48.855 "data_offset": 0, 00:16:48.855 "data_size": 0 00:16:48.855 
} 00:16:48.855 ] 00:16:48.855 }' 00:16:48.855 06:09:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:48.855 06:09:19 -- common/autotest_common.sh@10 -- # set +x 00:16:49.422 06:09:19 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:49.681 [2024-06-11 06:09:20.116984] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:49.681 [2024-06-11 06:09:20.117513] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:16:49.681 [2024-06-11 06:09:20.117637] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:49.681 [2024-06-11 06:09:20.117903] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:16:49.681 [2024-06-11 06:09:20.118368] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:16:49.681 [2024-06-11 06:09:20.118420] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:16:49.681 [2024-06-11 06:09:20.118681] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.681 BaseBdev3 00:16:49.681 06:09:20 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:49.681 06:09:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:49.681 06:09:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:49.681 06:09:20 -- common/autotest_common.sh@889 -- # local i 00:16:49.681 06:09:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:49.681 06:09:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:49.681 06:09:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:49.940 06:09:20 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:49.940 [ 00:16:49.940 { 00:16:49.940 "name": "BaseBdev3", 00:16:49.940 "aliases": [ 00:16:49.940 "94d1b22d-2a55-4b88-b325-2841051c3219" 00:16:49.940 ], 00:16:49.940 "product_name": "Malloc disk", 00:16:49.940 "block_size": 512, 00:16:49.940 "num_blocks": 65536, 00:16:49.940 "uuid": "94d1b22d-2a55-4b88-b325-2841051c3219", 00:16:49.940 "assigned_rate_limits": { 00:16:49.940 "rw_ios_per_sec": 0, 00:16:49.940 "rw_mbytes_per_sec": 0, 00:16:49.940 "r_mbytes_per_sec": 0, 00:16:49.940 "w_mbytes_per_sec": 0 00:16:49.940 }, 00:16:49.940 "claimed": true, 00:16:49.940 "claim_type": "exclusive_write", 00:16:49.940 "zoned": false, 00:16:49.940 "supported_io_types": { 00:16:49.940 "read": true, 00:16:49.940 "write": true, 00:16:49.940 "unmap": true, 00:16:49.940 "write_zeroes": true, 00:16:49.940 "flush": true, 00:16:49.940 "reset": true, 00:16:49.940 "compare": false, 00:16:49.940 "compare_and_write": false, 00:16:49.940 "abort": true, 00:16:49.940 "nvme_admin": false, 00:16:49.940 "nvme_io": false 00:16:49.940 }, 00:16:49.940 "memory_domains": [ 00:16:49.940 { 00:16:49.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.940 "dma_device_type": 2 00:16:49.940 } 00:16:49.940 ], 00:16:49.940 "driver_specific": {} 00:16:49.940 } 00:16:49.940 ] 00:16:49.940 06:09:20 -- common/autotest_common.sh@895 -- # return 0 00:16:49.940 06:09:20 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:49.940 06:09:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:49.940 06:09:20 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:49.940 06:09:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:49.940 06:09:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:49.940 06:09:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:49.940 06:09:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:49.940 06:09:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:49.940 06:09:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:49.940 06:09:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:49.940 06:09:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:49.940 06:09:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:49.940 06:09:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.940 06:09:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.199 06:09:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:50.199 "name": "Existed_Raid", 00:16:50.199 "uuid": "a3b8b3f3-0b2e-4b47-82f8-a36da92d1739", 00:16:50.199 "strip_size_kb": 64, 00:16:50.199 "state": "online", 00:16:50.199 "raid_level": "raid0", 00:16:50.199 "superblock": true, 00:16:50.199 "num_base_bdevs": 3, 00:16:50.199 "num_base_bdevs_discovered": 3, 00:16:50.199 "num_base_bdevs_operational": 3, 00:16:50.199 "base_bdevs_list": [ 00:16:50.199 { 00:16:50.199 "name": "BaseBdev1", 00:16:50.199 "uuid": "8ebadb96-8711-428f-bb25-ed9ab82ba12c", 00:16:50.199 "is_configured": true, 00:16:50.199 "data_offset": 2048, 00:16:50.199 "data_size": 63488 00:16:50.199 }, 00:16:50.199 { 00:16:50.199 "name": "BaseBdev2", 00:16:50.199 "uuid": "d57d9e5f-fb57-4519-b108-3882eccc1b48", 00:16:50.199 "is_configured": true, 00:16:50.199 "data_offset": 2048, 00:16:50.199 "data_size": 63488 00:16:50.199 }, 00:16:50.199 { 00:16:50.199 "name": "BaseBdev3", 00:16:50.199 "uuid": "94d1b22d-2a55-4b88-b325-2841051c3219", 00:16:50.199 "is_configured": true, 00:16:50.199 "data_offset": 2048, 00:16:50.199 "data_size": 63488 00:16:50.199 } 00:16:50.199 ] 00:16:50.199 }' 00:16:50.199 06:09:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:50.199 06:09:20 -- common/autotest_common.sh@10 -- # set +x 00:16:50.766 06:09:21 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:51.025 [2024-06-11 06:09:21.621375] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:51.025 [2024-06-11 06:09:21.621417] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:51.025 [2024-06-11 06:09:21.621495] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.284 06:09:21 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:51.284 06:09:21 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:51.284 06:09:21 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:51.284 06:09:21 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:51.284 06:09:21 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:51.284 06:09:21 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:16:51.284 06:09:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:51.284 06:09:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:51.284 06:09:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:51.284 06:09:21 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:16:51.284 06:09:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:51.284 06:09:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:51.284 06:09:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:51.284 06:09:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:51.284 06:09:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:51.284 06:09:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.284 06:09:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.284 06:09:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:51.284 "name": "Existed_Raid", 00:16:51.284 "uuid": "a3b8b3f3-0b2e-4b47-82f8-a36da92d1739", 00:16:51.284 "strip_size_kb": 64, 00:16:51.284 "state": "offline", 00:16:51.284 "raid_level": "raid0", 00:16:51.284 "superblock": true, 00:16:51.284 "num_base_bdevs": 3, 00:16:51.284 "num_base_bdevs_discovered": 2, 00:16:51.284 "num_base_bdevs_operational": 2, 00:16:51.284 "base_bdevs_list": [ 00:16:51.284 { 00:16:51.284 "name": null, 00:16:51.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.284 "is_configured": false, 00:16:51.284 "data_offset": 2048, 00:16:51.284 "data_size": 63488 00:16:51.284 }, 00:16:51.284 { 00:16:51.284 "name": "BaseBdev2", 00:16:51.284 "uuid": "d57d9e5f-fb57-4519-b108-3882eccc1b48", 00:16:51.284 "is_configured": true, 00:16:51.284 "data_offset": 2048, 00:16:51.284 "data_size": 63488 00:16:51.284 }, 00:16:51.284 { 00:16:51.284 "name": "BaseBdev3", 00:16:51.284 "uuid": "94d1b22d-2a55-4b88-b325-2841051c3219", 00:16:51.284 "is_configured": true, 00:16:51.284 "data_offset": 2048, 00:16:51.284 "data_size": 63488 00:16:51.284 } 00:16:51.284 ] 00:16:51.284 }' 00:16:51.284 06:09:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:51.284 06:09:21 -- common/autotest_common.sh@10 -- # set +x 00:16:52.221 06:09:22 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:52.221 06:09:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:52.221 06:09:22 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:52.221 06:09:22 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.221 06:09:22 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:52.221 06:09:22 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:52.221 06:09:22 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:52.480 [2024-06-11 06:09:22.941372] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:52.480 06:09:23 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:52.480 06:09:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:52.480 06:09:23 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.480 06:09:23 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:52.739 06:09:23 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:52.739 06:09:23 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:52.739 06:09:23 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:52.998 [2024-06-11 06:09:23.466372] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:52.999 [2024-06-11 
06:09:23.466464] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:16:52.999 06:09:23 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:52.999 06:09:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:52.999 06:09:23 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.999 06:09:23 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:53.258 06:09:23 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:53.258 06:09:23 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:53.258 06:09:23 -- bdev/bdev_raid.sh@287 -- # killprocess 115946 00:16:53.258 06:09:23 -- common/autotest_common.sh@926 -- # '[' -z 115946 ']' 00:16:53.258 06:09:23 -- common/autotest_common.sh@930 -- # kill -0 115946 00:16:53.258 06:09:23 -- common/autotest_common.sh@931 -- # uname 00:16:53.258 06:09:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:53.258 06:09:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115946 00:16:53.258 06:09:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:53.258 06:09:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:53.258 06:09:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115946' 00:16:53.258 killing process with pid 115946 00:16:53.258 06:09:23 -- common/autotest_common.sh@945 -- # kill 115946 00:16:53.258 [2024-06-11 06:09:23.801669] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:53.258 [2024-06-11 06:09:23.801804] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:53.258 06:09:23 -- common/autotest_common.sh@950 -- # wait 115946 00:16:54.635 ************************************ 00:16:54.635 END TEST raid_state_function_test_sb 00:16:54.635 ************************************ 00:16:54.635 06:09:25 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:54.635 00:16:54.635 real 0m12.655s 00:16:54.635 user 0m21.083s 00:16:54.635 sys 0m2.015s 00:16:54.635 06:09:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:54.635 06:09:25 -- common/autotest_common.sh@10 -- # set +x 00:16:54.635 06:09:25 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:16:54.635 06:09:25 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:54.635 06:09:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:54.635 06:09:25 -- common/autotest_common.sh@10 -- # set +x 00:16:54.635 ************************************ 00:16:54.635 START TEST raid_superblock_test 00:16:54.635 ************************************ 00:16:54.635 06:09:25 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 3 00:16:54.635 06:09:25 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:16:54.636 06:09:25 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:16:54.636 06:09:25 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:54.636 06:09:25 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:54.636 06:09:25 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:54.636 06:09:25 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:54.636 06:09:25 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:54.636 06:09:25 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:54.636 06:09:25 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:54.636 06:09:25 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:54.636 06:09:25 -- 
bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:54.636 06:09:25 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:54.636 06:09:25 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:54.636 06:09:25 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:16:54.636 06:09:25 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:16:54.636 06:09:25 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:16:54.636 06:09:25 -- bdev/bdev_raid.sh@357 -- # raid_pid=116338 00:16:54.636 06:09:25 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:54.636 06:09:25 -- bdev/bdev_raid.sh@358 -- # waitforlisten 116338 /var/tmp/spdk-raid.sock 00:16:54.636 06:09:25 -- common/autotest_common.sh@819 -- # '[' -z 116338 ']' 00:16:54.636 06:09:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:54.636 06:09:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:54.636 06:09:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:54.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:54.636 06:09:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:54.636 06:09:25 -- common/autotest_common.sh@10 -- # set +x 00:16:54.895 [2024-06-11 06:09:25.314391] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:16:54.895 [2024-06-11 06:09:25.314600] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116338 ] 00:16:54.895 [2024-06-11 06:09:25.501986] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.154 [2024-06-11 06:09:25.776099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.413 [2024-06-11 06:09:26.018609] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:55.672 06:09:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:55.672 06:09:26 -- common/autotest_common.sh@852 -- # return 0 00:16:55.672 06:09:26 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:55.672 06:09:26 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:55.672 06:09:26 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:55.672 06:09:26 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:55.672 06:09:26 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:55.672 06:09:26 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:55.672 06:09:26 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:55.672 06:09:26 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:55.672 06:09:26 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:55.930 malloc1 00:16:55.930 06:09:26 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:56.189 [2024-06-11 06:09:26.665525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:56.189 [2024-06-11 06:09:26.665646] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.189 
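Annotation: each member of the array in this superblock test is a passthru (pt) bdev layered on a malloc bdev, as the vbdev_passthru notices above show. The passthru layer is presumably used because bdev_passthru_create accepts a caller-pinned UUID (-u), giving every member a stable identity across delete/re-create cycles — the kind of identity an on-disk superblock needs to recognize its base bdevs later. The pairing, exactly as issued in this trace:

  # One RAID member = malloc backing store + passthru with a pinned UUID.
  $rpc -s $sock bdev_malloc_create 32 512 -b malloc1
  $rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001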
[2024-06-11 06:09:26.665688] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:16:56.189 [2024-06-11 06:09:26.665741] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.189 [2024-06-11 06:09:26.668409] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.189 [2024-06-11 06:09:26.668458] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:56.189 pt1 00:16:56.189 06:09:26 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:56.189 06:09:26 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:56.189 06:09:26 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:56.189 06:09:26 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:56.189 06:09:26 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:56.189 06:09:26 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:56.189 06:09:26 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:56.189 06:09:26 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:56.189 06:09:26 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:56.448 malloc2 00:16:56.448 06:09:26 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:56.448 [2024-06-11 06:09:27.074742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:56.448 [2024-06-11 06:09:27.074834] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.448 [2024-06-11 06:09:27.074878] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:16:56.448 [2024-06-11 06:09:27.074938] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.448 [2024-06-11 06:09:27.077578] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.448 [2024-06-11 06:09:27.077652] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:56.448 pt2 00:16:56.448 06:09:27 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:56.448 06:09:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:56.448 06:09:27 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:16:56.448 06:09:27 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:16:56.448 06:09:27 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:56.448 06:09:27 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:56.448 06:09:27 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:56.707 06:09:27 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:56.707 06:09:27 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:56.707 malloc3 00:16:56.707 06:09:27 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:56.965 [2024-06-11 06:09:27.533550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:56.966 [2024-06-11 06:09:27.533645] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.966 
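Annotation: once pt1, pt2 and pt3 are all registered, the test assembles them with bdev_raid_create; the -s flag stamps an on-disk raid superblock onto every member, which the later negative case (re-creating a raid over bdevs that already carry a superblock fails with JSON-RPC error -17, "File exists") and the re-examine path both depend on. The create call and the follow-up state query, as logged below:

  $rpc -s $sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'   # "online" once all members attach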
[2024-06-11 06:09:27.533709] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:16:56.966 [2024-06-11 06:09:27.533755] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.966 [2024-06-11 06:09:27.536314] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.966 [2024-06-11 06:09:27.536366] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:56.966 pt3 00:16:56.966 06:09:27 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:56.966 06:09:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:56.966 06:09:27 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:16:57.226 [2024-06-11 06:09:27.705630] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:57.226 [2024-06-11 06:09:27.707863] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:57.226 [2024-06-11 06:09:27.707924] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:57.226 [2024-06-11 06:09:27.708101] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:16:57.226 [2024-06-11 06:09:27.708111] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:57.226 [2024-06-11 06:09:27.708273] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:16:57.226 [2024-06-11 06:09:27.708626] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:16:57.226 [2024-06-11 06:09:27.708645] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:16:57.226 [2024-06-11 06:09:27.708793] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.226 06:09:27 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:57.226 06:09:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:57.226 06:09:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:57.226 06:09:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:57.226 06:09:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:57.226 06:09:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:57.226 06:09:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:57.226 06:09:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:57.226 06:09:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:57.226 06:09:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:57.226 06:09:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.226 06:09:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.535 06:09:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:57.535 "name": "raid_bdev1", 00:16:57.535 "uuid": "5ba7853c-191e-416b-873b-82c6d5fe8c49", 00:16:57.535 "strip_size_kb": 64, 00:16:57.535 "state": "online", 00:16:57.535 "raid_level": "raid0", 00:16:57.535 "superblock": true, 00:16:57.535 "num_base_bdevs": 3, 00:16:57.535 "num_base_bdevs_discovered": 3, 00:16:57.535 "num_base_bdevs_operational": 3, 00:16:57.535 "base_bdevs_list": [ 00:16:57.535 { 00:16:57.535 "name": "pt1", 00:16:57.536 "uuid": 
"0bc95635-228d-5380-ba3a-ec98bf61e34d", 00:16:57.536 "is_configured": true, 00:16:57.536 "data_offset": 2048, 00:16:57.536 "data_size": 63488 00:16:57.536 }, 00:16:57.536 { 00:16:57.536 "name": "pt2", 00:16:57.536 "uuid": "89f2bd76-49e8-5d92-af7e-03f6ac4721ca", 00:16:57.536 "is_configured": true, 00:16:57.536 "data_offset": 2048, 00:16:57.536 "data_size": 63488 00:16:57.536 }, 00:16:57.536 { 00:16:57.536 "name": "pt3", 00:16:57.536 "uuid": "cd2f9498-3e43-5eb4-bf99-fa0700245416", 00:16:57.536 "is_configured": true, 00:16:57.536 "data_offset": 2048, 00:16:57.536 "data_size": 63488 00:16:57.536 } 00:16:57.536 ] 00:16:57.536 }' 00:16:57.536 06:09:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:57.536 06:09:27 -- common/autotest_common.sh@10 -- # set +x 00:16:58.106 06:09:28 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:58.106 06:09:28 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:58.106 [2024-06-11 06:09:28.653936] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.106 06:09:28 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=5ba7853c-191e-416b-873b-82c6d5fe8c49 00:16:58.106 06:09:28 -- bdev/bdev_raid.sh@380 -- # '[' -z 5ba7853c-191e-416b-873b-82c6d5fe8c49 ']' 00:16:58.106 06:09:28 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:58.364 [2024-06-11 06:09:28.901764] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.364 [2024-06-11 06:09:28.901793] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:58.364 [2024-06-11 06:09:28.901901] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:58.365 [2024-06-11 06:09:28.901968] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:58.365 [2024-06-11 06:09:28.901977] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:16:58.365 06:09:28 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.365 06:09:28 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:58.623 06:09:29 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:58.623 06:09:29 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:58.623 06:09:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:58.623 06:09:29 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:58.624 06:09:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:58.624 06:09:29 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:58.882 06:09:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:58.882 06:09:29 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:59.141 06:09:29 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:59.141 06:09:29 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:59.399 06:09:29 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:59.399 06:09:29 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:59.399 06:09:29 -- common/autotest_common.sh@640 -- # local es=0 00:16:59.399 06:09:29 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:59.399 06:09:29 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:59.399 06:09:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:59.399 06:09:29 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:59.399 06:09:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:59.399 06:09:29 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:59.399 06:09:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:59.399 06:09:29 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:59.399 06:09:29 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:59.399 06:09:29 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:59.658 [2024-06-11 06:09:30.073980] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:59.658 [2024-06-11 06:09:30.076259] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:59.658 [2024-06-11 06:09:30.076307] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:59.658 [2024-06-11 06:09:30.076355] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:59.658 [2024-06-11 06:09:30.076449] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:59.658 [2024-06-11 06:09:30.076488] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:16:59.658 [2024-06-11 06:09:30.076536] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:59.658 [2024-06-11 06:09:30.076546] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:16:59.658 request: 00:16:59.658 { 00:16:59.658 "name": "raid_bdev1", 00:16:59.658 "raid_level": "raid0", 00:16:59.658 "base_bdevs": [ 00:16:59.658 "malloc1", 00:16:59.658 "malloc2", 00:16:59.658 "malloc3" 00:16:59.658 ], 00:16:59.658 "superblock": false, 00:16:59.658 "strip_size_kb": 64, 00:16:59.658 "method": "bdev_raid_create", 00:16:59.658 "req_id": 1 00:16:59.658 } 00:16:59.658 Got JSON-RPC error response 00:16:59.658 response: 00:16:59.658 { 00:16:59.658 "code": -17, 00:16:59.658 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:59.658 } 00:16:59.658 06:09:30 -- common/autotest_common.sh@643 -- # es=1 00:16:59.658 06:09:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:59.658 06:09:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:59.658 06:09:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:59.658 06:09:30 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.658 06:09:30 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:59.917 06:09:30 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:59.917 06:09:30 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:59.917 06:09:30 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:00.176 [2024-06-11 06:09:30.577997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:00.176 [2024-06-11 06:09:30.578092] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.176 [2024-06-11 06:09:30.578132] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:00.176 [2024-06-11 06:09:30.578156] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.176 [2024-06-11 06:09:30.580769] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.176 [2024-06-11 06:09:30.580860] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:00.176 [2024-06-11 06:09:30.581000] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:00.176 [2024-06-11 06:09:30.581065] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:00.176 pt1 00:17:00.176 06:09:30 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:17:00.176 06:09:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:00.176 06:09:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:00.176 06:09:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:00.176 06:09:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:00.176 06:09:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:00.176 06:09:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:00.176 06:09:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:00.176 06:09:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:00.176 06:09:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:00.176 06:09:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.176 06:09:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.176 06:09:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:00.176 "name": "raid_bdev1", 00:17:00.176 "uuid": "5ba7853c-191e-416b-873b-82c6d5fe8c49", 00:17:00.176 "strip_size_kb": 64, 00:17:00.176 "state": "configuring", 00:17:00.176 "raid_level": "raid0", 00:17:00.176 "superblock": true, 00:17:00.176 "num_base_bdevs": 3, 00:17:00.176 "num_base_bdevs_discovered": 1, 00:17:00.176 "num_base_bdevs_operational": 3, 00:17:00.176 "base_bdevs_list": [ 00:17:00.176 { 00:17:00.176 "name": "pt1", 00:17:00.176 "uuid": "0bc95635-228d-5380-ba3a-ec98bf61e34d", 00:17:00.176 "is_configured": true, 00:17:00.176 "data_offset": 2048, 00:17:00.176 "data_size": 63488 00:17:00.176 }, 00:17:00.176 { 00:17:00.176 "name": null, 00:17:00.176 "uuid": "89f2bd76-49e8-5d92-af7e-03f6ac4721ca", 00:17:00.176 "is_configured": false, 00:17:00.176 "data_offset": 2048, 00:17:00.176 "data_size": 63488 00:17:00.176 }, 00:17:00.176 { 00:17:00.176 "name": null, 00:17:00.176 "uuid": "cd2f9498-3e43-5eb4-bf99-fa0700245416", 00:17:00.176 "is_configured": false, 00:17:00.176 
"data_offset": 2048, 00:17:00.176 "data_size": 63488 00:17:00.176 } 00:17:00.176 ] 00:17:00.176 }' 00:17:00.176 06:09:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:00.176 06:09:30 -- common/autotest_common.sh@10 -- # set +x 00:17:00.743 06:09:31 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:17:00.743 06:09:31 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:01.001 [2024-06-11 06:09:31.390141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:01.001 [2024-06-11 06:09:31.390239] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.001 [2024-06-11 06:09:31.390289] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:01.001 [2024-06-11 06:09:31.390312] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.001 [2024-06-11 06:09:31.390815] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.001 [2024-06-11 06:09:31.390851] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:01.001 [2024-06-11 06:09:31.390993] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:01.001 [2024-06-11 06:09:31.391015] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:01.001 pt2 00:17:01.001 06:09:31 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:01.001 [2024-06-11 06:09:31.562200] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:01.001 06:09:31 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:17:01.001 06:09:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:01.001 06:09:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:01.001 06:09:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:01.001 06:09:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:01.001 06:09:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:01.001 06:09:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:01.001 06:09:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:01.001 06:09:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:01.001 06:09:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:01.001 06:09:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.001 06:09:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.260 06:09:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:01.260 "name": "raid_bdev1", 00:17:01.260 "uuid": "5ba7853c-191e-416b-873b-82c6d5fe8c49", 00:17:01.260 "strip_size_kb": 64, 00:17:01.260 "state": "configuring", 00:17:01.260 "raid_level": "raid0", 00:17:01.260 "superblock": true, 00:17:01.260 "num_base_bdevs": 3, 00:17:01.260 "num_base_bdevs_discovered": 1, 00:17:01.260 "num_base_bdevs_operational": 3, 00:17:01.260 "base_bdevs_list": [ 00:17:01.260 { 00:17:01.260 "name": "pt1", 00:17:01.260 "uuid": "0bc95635-228d-5380-ba3a-ec98bf61e34d", 00:17:01.260 "is_configured": true, 00:17:01.260 "data_offset": 2048, 00:17:01.260 "data_size": 63488 00:17:01.260 }, 00:17:01.260 { 00:17:01.260 "name": null, 00:17:01.260 "uuid": 
"89f2bd76-49e8-5d92-af7e-03f6ac4721ca", 00:17:01.260 "is_configured": false, 00:17:01.260 "data_offset": 2048, 00:17:01.260 "data_size": 63488 00:17:01.260 }, 00:17:01.260 { 00:17:01.260 "name": null, 00:17:01.260 "uuid": "cd2f9498-3e43-5eb4-bf99-fa0700245416", 00:17:01.260 "is_configured": false, 00:17:01.260 "data_offset": 2048, 00:17:01.260 "data_size": 63488 00:17:01.260 } 00:17:01.260 ] 00:17:01.260 }' 00:17:01.260 06:09:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:01.260 06:09:31 -- common/autotest_common.sh@10 -- # set +x 00:17:01.827 06:09:32 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:01.827 06:09:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:01.827 06:09:32 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:02.085 [2024-06-11 06:09:32.662358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:02.085 [2024-06-11 06:09:32.662481] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.085 [2024-06-11 06:09:32.662524] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:02.085 [2024-06-11 06:09:32.662553] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.085 [2024-06-11 06:09:32.663065] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.085 [2024-06-11 06:09:32.663115] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:02.085 [2024-06-11 06:09:32.663255] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:02.085 [2024-06-11 06:09:32.663277] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:02.085 pt2 00:17:02.085 06:09:32 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:02.085 06:09:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:02.085 06:09:32 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:02.343 [2024-06-11 06:09:32.834398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:02.343 [2024-06-11 06:09:32.834490] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.343 [2024-06-11 06:09:32.834528] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:02.343 [2024-06-11 06:09:32.834556] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.344 [2024-06-11 06:09:32.835037] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.344 [2024-06-11 06:09:32.835079] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:02.344 [2024-06-11 06:09:32.835205] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:02.344 [2024-06-11 06:09:32.835224] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:02.344 [2024-06-11 06:09:32.835340] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:17:02.344 [2024-06-11 06:09:32.835356] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:02.344 [2024-06-11 06:09:32.835471] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:17:02.344 [2024-06-11 06:09:32.835767] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:17:02.344 [2024-06-11 06:09:32.835785] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:17:02.344 [2024-06-11 06:09:32.835926] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.344 pt3 00:17:02.344 06:09:32 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:02.344 06:09:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:02.344 06:09:32 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:02.344 06:09:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:02.344 06:09:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:02.344 06:09:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:02.344 06:09:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:02.344 06:09:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:02.344 06:09:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:02.344 06:09:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:02.344 06:09:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:02.344 06:09:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:02.344 06:09:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.344 06:09:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.602 06:09:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:02.602 "name": "raid_bdev1", 00:17:02.602 "uuid": "5ba7853c-191e-416b-873b-82c6d5fe8c49", 00:17:02.602 "strip_size_kb": 64, 00:17:02.602 "state": "online", 00:17:02.602 "raid_level": "raid0", 00:17:02.602 "superblock": true, 00:17:02.602 "num_base_bdevs": 3, 00:17:02.602 "num_base_bdevs_discovered": 3, 00:17:02.602 "num_base_bdevs_operational": 3, 00:17:02.602 "base_bdevs_list": [ 00:17:02.602 { 00:17:02.602 "name": "pt1", 00:17:02.602 "uuid": "0bc95635-228d-5380-ba3a-ec98bf61e34d", 00:17:02.602 "is_configured": true, 00:17:02.602 "data_offset": 2048, 00:17:02.602 "data_size": 63488 00:17:02.602 }, 00:17:02.602 { 00:17:02.602 "name": "pt2", 00:17:02.602 "uuid": "89f2bd76-49e8-5d92-af7e-03f6ac4721ca", 00:17:02.602 "is_configured": true, 00:17:02.602 "data_offset": 2048, 00:17:02.602 "data_size": 63488 00:17:02.602 }, 00:17:02.602 { 00:17:02.602 "name": "pt3", 00:17:02.602 "uuid": "cd2f9498-3e43-5eb4-bf99-fa0700245416", 00:17:02.602 "is_configured": true, 00:17:02.602 "data_offset": 2048, 00:17:02.602 "data_size": 63488 00:17:02.602 } 00:17:02.602 ] 00:17:02.602 }' 00:17:02.602 06:09:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:02.602 06:09:33 -- common/autotest_common.sh@10 -- # set +x 00:17:03.169 06:09:33 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:03.169 06:09:33 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:03.427 [2024-06-11 06:09:33.850799] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.427 06:09:33 -- bdev/bdev_raid.sh@430 -- # '[' 5ba7853c-191e-416b-873b-82c6d5fe8c49 '!=' 5ba7853c-191e-416b-873b-82c6d5fe8c49 ']' 00:17:03.427 06:09:33 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:17:03.427 06:09:33 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:03.427 
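Annotation: the run above is the negative-path half of raid_superblock_test: bdev_raid_create must fail with -17 (File exists) because malloc1, malloc2 and malloc3 still carry a raid superblock, after which the array is re-assembled by wrapping each base bdev in a passthru so examine re-discovers the superblock and brings raid_bdev1 online once pt3 registers. A minimal sketch of that flow, reusing the exact socket path and RPC invocations traced above (only the || guard is an illustrative addition):

    # expected to fail: the base bdevs already hold a raid superblock (JSON-RPC error -17)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 \
        || echo 'create failed as expected'
    # wrap a base bdev in a passthru; examine then reloads the superblock and re-claims it
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001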
06:09:33 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:03.427 06:09:33 -- bdev/bdev_raid.sh@511 -- # killprocess 116338 00:17:03.427 06:09:33 -- common/autotest_common.sh@926 -- # '[' -z 116338 ']' 00:17:03.427 06:09:33 -- common/autotest_common.sh@930 -- # kill -0 116338 00:17:03.427 06:09:33 -- common/autotest_common.sh@931 -- # uname 00:17:03.427 06:09:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:03.427 06:09:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116338 00:17:03.427 06:09:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:03.427 06:09:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:03.427 killing process with pid 116338 00:17:03.427 06:09:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116338' 00:17:03.427 06:09:33 -- common/autotest_common.sh@945 -- # kill 116338 00:17:03.427 [2024-06-11 06:09:33.901978] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:03.427 [2024-06-11 06:09:33.902063] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:03.427 06:09:33 -- common/autotest_common.sh@950 -- # wait 116338 00:17:03.427 [2024-06-11 06:09:33.902129] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:03.427 [2024-06-11 06:09:33.902138] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:17:03.686 [2024-06-11 06:09:34.204318] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:05.064 00:17:05.064 real 0m10.341s 00:17:05.064 user 0m16.893s 00:17:05.064 sys 0m1.762s 00:17:05.064 06:09:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:05.064 ************************************ 00:17:05.064 06:09:35 -- common/autotest_common.sh@10 -- # set +x 00:17:05.064 END TEST raid_superblock_test 00:17:05.064 ************************************ 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:17:05.064 06:09:35 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:05.064 06:09:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:05.064 06:09:35 -- common/autotest_common.sh@10 -- # set +x 00:17:05.064 ************************************ 00:17:05.064 START TEST raid_state_function_test 00:17:05.064 ************************************ 00:17:05.064 06:09:35 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 false 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@226 -- # raid_pid=116643 00:17:05.064 Process raid pid: 116643 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 116643' 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@228 -- # waitforlisten 116643 /var/tmp/spdk-raid.sock 00:17:05.064 06:09:35 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:05.064 06:09:35 -- common/autotest_common.sh@819 -- # '[' -z 116643 ']' 00:17:05.064 06:09:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:05.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:05.064 06:09:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:05.064 06:09:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:05.064 06:09:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:05.064 06:09:35 -- common/autotest_common.sh@10 -- # set +x 00:17:05.323 [2024-06-11 06:09:35.730056] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
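Annotation: raid_state_function_test begins by launching a dedicated bdev_svc app and blocking until its RPC socket accepts connections; the bdev_raid.sh@225-@228 lines above reduce to the sketch below. waitforlisten is the autotest_common.sh helper seen in the trace; capturing the pid with $! is an illustrative simplification of the harness plumbing.

    # start the standalone bdev service with raid debug logging enabled
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock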
00:17:05.323 [2024-06-11 06:09:35.731067] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.323 [2024-06-11 06:09:35.914776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.582 [2024-06-11 06:09:36.152788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.841 [2024-06-11 06:09:36.396138] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.100 06:09:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:06.100 06:09:36 -- common/autotest_common.sh@852 -- # return 0 00:17:06.100 06:09:36 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:06.360 [2024-06-11 06:09:36.849587] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:06.360 [2024-06-11 06:09:36.849693] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:06.360 [2024-06-11 06:09:36.849705] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:06.360 [2024-06-11 06:09:36.849725] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:06.360 [2024-06-11 06:09:36.849732] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:06.360 [2024-06-11 06:09:36.849776] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:06.360 06:09:36 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:06.360 06:09:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:06.360 06:09:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:06.360 06:09:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:06.360 06:09:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:06.360 06:09:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:06.360 06:09:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:06.360 06:09:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:06.360 06:09:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:06.360 06:09:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:06.360 06:09:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.360 06:09:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.619 06:09:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:06.619 "name": "Existed_Raid", 00:17:06.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.619 "strip_size_kb": 64, 00:17:06.619 "state": "configuring", 00:17:06.619 "raid_level": "concat", 00:17:06.619 "superblock": false, 00:17:06.619 "num_base_bdevs": 3, 00:17:06.619 "num_base_bdevs_discovered": 0, 00:17:06.619 "num_base_bdevs_operational": 3, 00:17:06.619 "base_bdevs_list": [ 00:17:06.619 { 00:17:06.619 "name": "BaseBdev1", 00:17:06.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.619 "is_configured": false, 00:17:06.619 "data_offset": 0, 00:17:06.619 "data_size": 0 00:17:06.619 }, 00:17:06.619 { 00:17:06.619 "name": "BaseBdev2", 00:17:06.619 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:06.619 "is_configured": false, 00:17:06.619 "data_offset": 0, 00:17:06.619 "data_size": 0 00:17:06.619 }, 00:17:06.619 { 00:17:06.619 "name": "BaseBdev3", 00:17:06.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.619 "is_configured": false, 00:17:06.619 "data_offset": 0, 00:17:06.619 "data_size": 0 00:17:06.619 } 00:17:06.619 ] 00:17:06.619 }' 00:17:06.619 06:09:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:06.619 06:09:37 -- common/autotest_common.sh@10 -- # set +x 00:17:07.187 06:09:37 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:07.187 [2024-06-11 06:09:37.813640] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:07.187 [2024-06-11 06:09:37.813681] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:17:07.446 06:09:37 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:07.446 [2024-06-11 06:09:37.981683] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:07.446 [2024-06-11 06:09:37.981746] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:07.446 [2024-06-11 06:09:37.981757] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:07.446 [2024-06-11 06:09:37.981800] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:07.446 [2024-06-11 06:09:37.981807] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:07.446 [2024-06-11 06:09:37.981834] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:07.446 06:09:37 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:07.706 [2024-06-11 06:09:38.182513] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:07.706 BaseBdev1 00:17:07.706 06:09:38 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:07.706 06:09:38 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:07.706 06:09:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:07.706 06:09:38 -- common/autotest_common.sh@889 -- # local i 00:17:07.706 06:09:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:07.706 06:09:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:07.706 06:09:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:07.965 06:09:38 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:07.965 [ 00:17:07.965 { 00:17:07.965 "name": "BaseBdev1", 00:17:07.965 "aliases": [ 00:17:07.965 "225e0947-ded5-4ac2-bee5-e273320b2e00" 00:17:07.965 ], 00:17:07.965 "product_name": "Malloc disk", 00:17:07.965 "block_size": 512, 00:17:07.965 "num_blocks": 65536, 00:17:07.965 "uuid": "225e0947-ded5-4ac2-bee5-e273320b2e00", 00:17:07.965 "assigned_rate_limits": { 00:17:07.965 "rw_ios_per_sec": 0, 00:17:07.965 "rw_mbytes_per_sec": 0, 00:17:07.965 "r_mbytes_per_sec": 0, 00:17:07.965 "w_mbytes_per_sec": 
0 00:17:07.965 }, 00:17:07.965 "claimed": true, 00:17:07.965 "claim_type": "exclusive_write", 00:17:07.965 "zoned": false, 00:17:07.965 "supported_io_types": { 00:17:07.965 "read": true, 00:17:07.965 "write": true, 00:17:07.965 "unmap": true, 00:17:07.965 "write_zeroes": true, 00:17:07.965 "flush": true, 00:17:07.965 "reset": true, 00:17:07.965 "compare": false, 00:17:07.965 "compare_and_write": false, 00:17:07.965 "abort": true, 00:17:07.965 "nvme_admin": false, 00:17:07.965 "nvme_io": false 00:17:07.965 }, 00:17:07.965 "memory_domains": [ 00:17:07.965 { 00:17:07.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.965 "dma_device_type": 2 00:17:07.965 } 00:17:07.965 ], 00:17:07.965 "driver_specific": {} 00:17:07.965 } 00:17:07.965 ] 00:17:08.224 06:09:38 -- common/autotest_common.sh@895 -- # return 0 00:17:08.224 06:09:38 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:08.224 06:09:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:08.224 06:09:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:08.224 06:09:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:08.224 06:09:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:08.224 06:09:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:08.224 06:09:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:08.224 06:09:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:08.224 06:09:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:08.224 06:09:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:08.224 06:09:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.224 06:09:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.224 06:09:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:08.224 "name": "Existed_Raid", 00:17:08.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.224 "strip_size_kb": 64, 00:17:08.224 "state": "configuring", 00:17:08.224 "raid_level": "concat", 00:17:08.224 "superblock": false, 00:17:08.224 "num_base_bdevs": 3, 00:17:08.224 "num_base_bdevs_discovered": 1, 00:17:08.224 "num_base_bdevs_operational": 3, 00:17:08.224 "base_bdevs_list": [ 00:17:08.224 { 00:17:08.224 "name": "BaseBdev1", 00:17:08.224 "uuid": "225e0947-ded5-4ac2-bee5-e273320b2e00", 00:17:08.224 "is_configured": true, 00:17:08.224 "data_offset": 0, 00:17:08.224 "data_size": 65536 00:17:08.224 }, 00:17:08.224 { 00:17:08.224 "name": "BaseBdev2", 00:17:08.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.224 "is_configured": false, 00:17:08.224 "data_offset": 0, 00:17:08.224 "data_size": 0 00:17:08.224 }, 00:17:08.224 { 00:17:08.224 "name": "BaseBdev3", 00:17:08.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.224 "is_configured": false, 00:17:08.224 "data_offset": 0, 00:17:08.224 "data_size": 0 00:17:08.224 } 00:17:08.224 ] 00:17:08.224 }' 00:17:08.224 06:09:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:08.224 06:09:38 -- common/autotest_common.sh@10 -- # set +x 00:17:09.162 06:09:39 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:09.162 [2024-06-11 06:09:39.698829] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:09.162 [2024-06-11 06:09:39.698904] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006680 name Existed_Raid, state configuring 00:17:09.162 06:09:39 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:09.162 06:09:39 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:09.421 [2024-06-11 06:09:39.958963] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:09.421 [2024-06-11 06:09:39.961267] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:09.421 [2024-06-11 06:09:39.961326] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:09.421 [2024-06-11 06:09:39.961336] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:09.421 [2024-06-11 06:09:39.961378] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:09.421 06:09:39 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:09.421 06:09:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:09.421 06:09:39 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:09.421 06:09:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:09.421 06:09:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:09.421 06:09:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:09.421 06:09:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:09.421 06:09:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:09.421 06:09:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:09.421 06:09:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:09.421 06:09:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:09.421 06:09:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:09.421 06:09:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.421 06:09:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.680 06:09:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:09.680 "name": "Existed_Raid", 00:17:09.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.680 "strip_size_kb": 64, 00:17:09.680 "state": "configuring", 00:17:09.680 "raid_level": "concat", 00:17:09.680 "superblock": false, 00:17:09.680 "num_base_bdevs": 3, 00:17:09.680 "num_base_bdevs_discovered": 1, 00:17:09.680 "num_base_bdevs_operational": 3, 00:17:09.680 "base_bdevs_list": [ 00:17:09.680 { 00:17:09.680 "name": "BaseBdev1", 00:17:09.680 "uuid": "225e0947-ded5-4ac2-bee5-e273320b2e00", 00:17:09.680 "is_configured": true, 00:17:09.680 "data_offset": 0, 00:17:09.680 "data_size": 65536 00:17:09.680 }, 00:17:09.680 { 00:17:09.680 "name": "BaseBdev2", 00:17:09.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.680 "is_configured": false, 00:17:09.680 "data_offset": 0, 00:17:09.680 "data_size": 0 00:17:09.680 }, 00:17:09.680 { 00:17:09.680 "name": "BaseBdev3", 00:17:09.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.680 "is_configured": false, 00:17:09.680 "data_offset": 0, 00:17:09.680 "data_size": 0 00:17:09.680 } 00:17:09.680 ] 00:17:09.680 }' 00:17:09.680 06:09:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:09.680 06:09:40 -- common/autotest_common.sh@10 -- # set +x 00:17:10.248 06:09:40 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:10.507 [2024-06-11 06:09:41.064989] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:10.507 BaseBdev2 00:17:10.507 06:09:41 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:10.507 06:09:41 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:10.507 06:09:41 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:10.507 06:09:41 -- common/autotest_common.sh@889 -- # local i 00:17:10.507 06:09:41 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:10.507 06:09:41 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:10.507 06:09:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:10.766 06:09:41 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:11.026 [ 00:17:11.026 { 00:17:11.026 "name": "BaseBdev2", 00:17:11.026 "aliases": [ 00:17:11.026 "06dfba86-ef26-451a-a544-d461a4212155" 00:17:11.026 ], 00:17:11.026 "product_name": "Malloc disk", 00:17:11.026 "block_size": 512, 00:17:11.026 "num_blocks": 65536, 00:17:11.026 "uuid": "06dfba86-ef26-451a-a544-d461a4212155", 00:17:11.026 "assigned_rate_limits": { 00:17:11.026 "rw_ios_per_sec": 0, 00:17:11.026 "rw_mbytes_per_sec": 0, 00:17:11.026 "r_mbytes_per_sec": 0, 00:17:11.026 "w_mbytes_per_sec": 0 00:17:11.026 }, 00:17:11.026 "claimed": true, 00:17:11.026 "claim_type": "exclusive_write", 00:17:11.026 "zoned": false, 00:17:11.026 "supported_io_types": { 00:17:11.026 "read": true, 00:17:11.026 "write": true, 00:17:11.026 "unmap": true, 00:17:11.026 "write_zeroes": true, 00:17:11.026 "flush": true, 00:17:11.026 "reset": true, 00:17:11.026 "compare": false, 00:17:11.026 "compare_and_write": false, 00:17:11.026 "abort": true, 00:17:11.026 "nvme_admin": false, 00:17:11.026 "nvme_io": false 00:17:11.026 }, 00:17:11.026 "memory_domains": [ 00:17:11.026 { 00:17:11.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.026 "dma_device_type": 2 00:17:11.026 } 00:17:11.026 ], 00:17:11.026 "driver_specific": {} 00:17:11.026 } 00:17:11.026 ] 00:17:11.026 06:09:41 -- common/autotest_common.sh@895 -- # return 0 00:17:11.026 06:09:41 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:11.026 06:09:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:11.026 06:09:41 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:11.026 06:09:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:11.026 06:09:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:11.026 06:09:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:11.026 06:09:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:11.026 06:09:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:11.026 06:09:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:11.026 06:09:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:11.026 06:09:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:11.026 06:09:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:11.026 06:09:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.026 06:09:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
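Annotation: each verify_raid_bdev_state call in this trace re-reads the array with bdev_raid_get_bdevs and filters the JSON with jq before asserting on the captured object. The RPC and the jq filter below are verbatim from the @127 lines above; the .state check is an assumed example of the helper's assertions, matching the JSON it captures here:

    raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")')
    # e.g. assert the expected state while only BaseBdev1 and BaseBdev2 are discovered
    [[ $(jq -r '.state' <<< "$raid_bdev_info") == configuring ]]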
00:17:11.286 06:09:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:11.286 "name": "Existed_Raid", 00:17:11.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.286 "strip_size_kb": 64, 00:17:11.286 "state": "configuring", 00:17:11.286 "raid_level": "concat", 00:17:11.286 "superblock": false, 00:17:11.286 "num_base_bdevs": 3, 00:17:11.286 "num_base_bdevs_discovered": 2, 00:17:11.286 "num_base_bdevs_operational": 3, 00:17:11.286 "base_bdevs_list": [ 00:17:11.286 { 00:17:11.286 "name": "BaseBdev1", 00:17:11.286 "uuid": "225e0947-ded5-4ac2-bee5-e273320b2e00", 00:17:11.286 "is_configured": true, 00:17:11.286 "data_offset": 0, 00:17:11.286 "data_size": 65536 00:17:11.286 }, 00:17:11.286 { 00:17:11.286 "name": "BaseBdev2", 00:17:11.286 "uuid": "06dfba86-ef26-451a-a544-d461a4212155", 00:17:11.286 "is_configured": true, 00:17:11.286 "data_offset": 0, 00:17:11.286 "data_size": 65536 00:17:11.286 }, 00:17:11.286 { 00:17:11.286 "name": "BaseBdev3", 00:17:11.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.286 "is_configured": false, 00:17:11.286 "data_offset": 0, 00:17:11.286 "data_size": 0 00:17:11.286 } 00:17:11.286 ] 00:17:11.286 }' 00:17:11.286 06:09:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:11.286 06:09:41 -- common/autotest_common.sh@10 -- # set +x 00:17:11.854 06:09:42 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:12.114 [2024-06-11 06:09:42.583664] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:12.114 [2024-06-11 06:09:42.583714] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:17:12.114 [2024-06-11 06:09:42.583722] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:12.114 [2024-06-11 06:09:42.583870] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:17:12.114 [2024-06-11 06:09:42.584223] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:17:12.114 [2024-06-11 06:09:42.584241] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:17:12.114 [2024-06-11 06:09:42.584490] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.114 BaseBdev3 00:17:12.114 06:09:42 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:12.114 06:09:42 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:12.114 06:09:42 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:12.114 06:09:42 -- common/autotest_common.sh@889 -- # local i 00:17:12.114 06:09:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:12.114 06:09:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:12.114 06:09:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:12.373 06:09:42 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:12.633 [ 00:17:12.633 { 00:17:12.633 "name": "BaseBdev3", 00:17:12.633 "aliases": [ 00:17:12.633 "85f70347-438e-4550-ac2b-12ca206ee2b7" 00:17:12.633 ], 00:17:12.633 "product_name": "Malloc disk", 00:17:12.633 "block_size": 512, 00:17:12.633 "num_blocks": 65536, 00:17:12.633 "uuid": "85f70347-438e-4550-ac2b-12ca206ee2b7", 00:17:12.633 "assigned_rate_limits": { 00:17:12.633 
"rw_ios_per_sec": 0, 00:17:12.633 "rw_mbytes_per_sec": 0, 00:17:12.633 "r_mbytes_per_sec": 0, 00:17:12.633 "w_mbytes_per_sec": 0 00:17:12.633 }, 00:17:12.633 "claimed": true, 00:17:12.633 "claim_type": "exclusive_write", 00:17:12.633 "zoned": false, 00:17:12.633 "supported_io_types": { 00:17:12.633 "read": true, 00:17:12.633 "write": true, 00:17:12.633 "unmap": true, 00:17:12.633 "write_zeroes": true, 00:17:12.633 "flush": true, 00:17:12.633 "reset": true, 00:17:12.633 "compare": false, 00:17:12.633 "compare_and_write": false, 00:17:12.633 "abort": true, 00:17:12.633 "nvme_admin": false, 00:17:12.633 "nvme_io": false 00:17:12.633 }, 00:17:12.633 "memory_domains": [ 00:17:12.633 { 00:17:12.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.633 "dma_device_type": 2 00:17:12.633 } 00:17:12.633 ], 00:17:12.633 "driver_specific": {} 00:17:12.633 } 00:17:12.633 ] 00:17:12.633 06:09:43 -- common/autotest_common.sh@895 -- # return 0 00:17:12.633 06:09:43 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:12.633 06:09:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:12.633 06:09:43 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:12.633 06:09:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:12.633 06:09:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:12.633 06:09:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:12.633 06:09:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:12.633 06:09:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:12.633 06:09:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:12.633 06:09:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:12.633 06:09:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:12.633 06:09:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:12.633 06:09:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.633 06:09:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.892 06:09:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:12.892 "name": "Existed_Raid", 00:17:12.892 "uuid": "9cc62da2-5e55-4363-9b45-70535a73e54b", 00:17:12.892 "strip_size_kb": 64, 00:17:12.892 "state": "online", 00:17:12.892 "raid_level": "concat", 00:17:12.892 "superblock": false, 00:17:12.892 "num_base_bdevs": 3, 00:17:12.892 "num_base_bdevs_discovered": 3, 00:17:12.892 "num_base_bdevs_operational": 3, 00:17:12.892 "base_bdevs_list": [ 00:17:12.892 { 00:17:12.892 "name": "BaseBdev1", 00:17:12.892 "uuid": "225e0947-ded5-4ac2-bee5-e273320b2e00", 00:17:12.892 "is_configured": true, 00:17:12.892 "data_offset": 0, 00:17:12.892 "data_size": 65536 00:17:12.892 }, 00:17:12.892 { 00:17:12.892 "name": "BaseBdev2", 00:17:12.892 "uuid": "06dfba86-ef26-451a-a544-d461a4212155", 00:17:12.892 "is_configured": true, 00:17:12.892 "data_offset": 0, 00:17:12.892 "data_size": 65536 00:17:12.892 }, 00:17:12.892 { 00:17:12.892 "name": "BaseBdev3", 00:17:12.892 "uuid": "85f70347-438e-4550-ac2b-12ca206ee2b7", 00:17:12.892 "is_configured": true, 00:17:12.892 "data_offset": 0, 00:17:12.892 "data_size": 65536 00:17:12.892 } 00:17:12.892 ] 00:17:12.892 }' 00:17:12.892 06:09:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:12.892 06:09:43 -- common/autotest_common.sh@10 -- # set +x 00:17:13.461 06:09:43 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:17:13.461 [2024-06-11 06:09:44.008007] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:13.461 [2024-06-11 06:09:44.008056] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:13.461 [2024-06-11 06:09:44.008142] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:13.721 06:09:44 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:13.721 06:09:44 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:13.721 06:09:44 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:13.721 06:09:44 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:13.721 06:09:44 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:13.721 06:09:44 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:17:13.721 06:09:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:13.721 06:09:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:13.721 06:09:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:13.721 06:09:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:13.721 06:09:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:13.721 06:09:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:13.721 06:09:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:13.721 06:09:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:13.721 06:09:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:13.721 06:09:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.721 06:09:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.721 06:09:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:13.721 "name": "Existed_Raid", 00:17:13.721 "uuid": "9cc62da2-5e55-4363-9b45-70535a73e54b", 00:17:13.721 "strip_size_kb": 64, 00:17:13.721 "state": "offline", 00:17:13.721 "raid_level": "concat", 00:17:13.721 "superblock": false, 00:17:13.721 "num_base_bdevs": 3, 00:17:13.721 "num_base_bdevs_discovered": 2, 00:17:13.721 "num_base_bdevs_operational": 2, 00:17:13.721 "base_bdevs_list": [ 00:17:13.721 { 00:17:13.721 "name": null, 00:17:13.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.721 "is_configured": false, 00:17:13.721 "data_offset": 0, 00:17:13.721 "data_size": 65536 00:17:13.721 }, 00:17:13.721 { 00:17:13.721 "name": "BaseBdev2", 00:17:13.721 "uuid": "06dfba86-ef26-451a-a544-d461a4212155", 00:17:13.721 "is_configured": true, 00:17:13.721 "data_offset": 0, 00:17:13.721 "data_size": 65536 00:17:13.721 }, 00:17:13.721 { 00:17:13.721 "name": "BaseBdev3", 00:17:13.721 "uuid": "85f70347-438e-4550-ac2b-12ca206ee2b7", 00:17:13.721 "is_configured": true, 00:17:13.721 "data_offset": 0, 00:17:13.721 "data_size": 65536 00:17:13.721 } 00:17:13.721 ] 00:17:13.721 }' 00:17:13.721 06:09:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:13.721 06:09:44 -- common/autotest_common.sh@10 -- # set +x 00:17:14.290 06:09:44 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:14.290 06:09:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:14.290 06:09:44 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.290 06:09:44 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:14.549 06:09:45 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:14.549 06:09:45 -- 
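Annotation: because concat carries no redundancy (has_redundancy returned 1 in the trace just above), the first state-function test ends by deleting a base bdev and expecting the array to drop from online to offline rather than run degraded; the decisive step recorded before this _sb run starts is simply:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
    # Existed_Raid must then report "state": "offline" with num_base_bdevs_discovered: 2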
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:14.549 06:09:45 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:14.823 [2024-06-11 06:09:45.344377] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:15.146 06:09:45 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:15.146 06:09:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:15.146 06:09:45 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.146 06:09:45 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:15.146 06:09:45 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:15.146 06:09:45 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:15.146 06:09:45 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:15.422 [2024-06-11 06:09:45.869553] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:15.422 [2024-06-11 06:09:45.869623] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:17:15.422 06:09:45 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:15.422 06:09:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:15.422 06:09:45 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:15.422 06:09:45 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.681 06:09:46 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:15.681 06:09:46 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:15.681 06:09:46 -- bdev/bdev_raid.sh@287 -- # killprocess 116643 00:17:15.681 06:09:46 -- common/autotest_common.sh@926 -- # '[' -z 116643 ']' 00:17:15.681 06:09:46 -- common/autotest_common.sh@930 -- # kill -0 116643 00:17:15.681 06:09:46 -- common/autotest_common.sh@931 -- # uname 00:17:15.681 06:09:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:15.681 06:09:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116643 00:17:15.681 06:09:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:15.681 06:09:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:15.681 06:09:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116643' 00:17:15.681 killing process with pid 116643 00:17:15.681 06:09:46 -- common/autotest_common.sh@945 -- # kill 116643 00:17:15.681 [2024-06-11 06:09:46.203392] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:15.681 06:09:46 -- common/autotest_common.sh@950 -- # wait 116643 00:17:15.681 [2024-06-11 06:09:46.203544] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:17.061 ************************************ 00:17:17.061 END TEST raid_state_function_test 00:17:17.061 ************************************ 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:17.061 00:17:17.061 real 0m11.915s 00:17:17.061 user 0m19.866s 00:17:17.061 sys 0m2.007s 00:17:17.061 06:09:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:17.061 06:09:47 -- common/autotest_common.sh@10 -- # set +x 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:17:17.061 06:09:47 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 
00:17:17.061 06:09:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:17.061 06:09:47 -- common/autotest_common.sh@10 -- # set +x 00:17:17.061 ************************************ 00:17:17.061 START TEST raid_state_function_test_sb 00:17:17.061 ************************************ 00:17:17.061 06:09:47 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 true 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@226 -- # raid_pid=117019 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 117019' 00:17:17.061 Process raid pid: 117019 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@228 -- # waitforlisten 117019 /var/tmp/spdk-raid.sock 00:17:17.061 06:09:47 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:17.061 06:09:47 -- common/autotest_common.sh@819 -- # '[' -z 117019 ']' 00:17:17.061 06:09:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:17.061 06:09:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:17.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:17.061 06:09:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:17.061 06:09:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:17.061 06:09:47 -- common/autotest_common.sh@10 -- # set +x 00:17:17.321 [2024-06-11 06:09:47.724621] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:17:17.321 [2024-06-11 06:09:47.724836] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.321 [2024-06-11 06:09:47.908568] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.580 [2024-06-11 06:09:48.139934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.839 [2024-06-11 06:09:48.385745] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.099 06:09:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:18.099 06:09:48 -- common/autotest_common.sh@852 -- # return 0 00:17:18.099 06:09:48 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:18.358 [2024-06-11 06:09:48.856858] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:18.358 [2024-06-11 06:09:48.856939] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:18.358 [2024-06-11 06:09:48.856949] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:18.358 [2024-06-11 06:09:48.856968] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:18.358 [2024-06-11 06:09:48.856974] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:18.358 [2024-06-11 06:09:48.857018] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:18.358 06:09:48 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:18.358 06:09:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:18.358 06:09:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:18.358 06:09:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:18.358 06:09:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:18.358 06:09:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:18.358 06:09:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:18.358 06:09:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:18.358 06:09:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:18.358 06:09:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:18.358 06:09:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.358 06:09:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:18.617 06:09:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:18.617 "name": "Existed_Raid", 00:17:18.617 "uuid": "7395eaf4-33f0-443b-9ff2-db692d32d0ac", 00:17:18.617 "strip_size_kb": 64, 00:17:18.617 "state": "configuring", 00:17:18.617 "raid_level": "concat", 00:17:18.617 "superblock": true, 00:17:18.617 "num_base_bdevs": 3, 00:17:18.617 "num_base_bdevs_discovered": 0, 00:17:18.617 "num_base_bdevs_operational": 3, 00:17:18.617 "base_bdevs_list": [ 00:17:18.617 { 00:17:18.617 "name": "BaseBdev1", 00:17:18.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.617 "is_configured": false, 00:17:18.617 "data_offset": 0, 00:17:18.617 "data_size": 0 00:17:18.617 }, 00:17:18.617 { 00:17:18.617 "name": "BaseBdev2", 00:17:18.617 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:18.617 "is_configured": false, 00:17:18.617 "data_offset": 0, 00:17:18.617 "data_size": 0 00:17:18.617 }, 00:17:18.617 { 00:17:18.617 "name": "BaseBdev3", 00:17:18.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.617 "is_configured": false, 00:17:18.617 "data_offset": 0, 00:17:18.617 "data_size": 0 00:17:18.617 } 00:17:18.617 ] 00:17:18.617 }' 00:17:18.617 06:09:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:18.617 06:09:49 -- common/autotest_common.sh@10 -- # set +x 00:17:19.185 06:09:49 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:19.185 [2024-06-11 06:09:49.784808] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:19.185 [2024-06-11 06:09:49.784856] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:17:19.185 06:09:49 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:19.444 [2024-06-11 06:09:49.948931] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:19.444 [2024-06-11 06:09:49.948999] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:19.444 [2024-06-11 06:09:49.949009] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:19.444 [2024-06-11 06:09:49.949035] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:19.444 [2024-06-11 06:09:49.949043] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:19.444 [2024-06-11 06:09:49.949068] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:19.444 06:09:49 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:19.703 [2024-06-11 06:09:50.231824] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:19.703 BaseBdev1 00:17:19.703 06:09:50 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:19.703 06:09:50 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:19.703 06:09:50 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:19.703 06:09:50 -- common/autotest_common.sh@889 -- # local i 00:17:19.703 06:09:50 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:19.703 06:09:50 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:19.703 06:09:50 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:19.962 06:09:50 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:20.222 [ 00:17:20.222 { 00:17:20.222 "name": "BaseBdev1", 00:17:20.222 "aliases": [ 00:17:20.222 "e719220b-02b7-4211-98cb-53d1b2defffb" 00:17:20.222 ], 00:17:20.222 "product_name": "Malloc disk", 00:17:20.222 "block_size": 512, 00:17:20.222 "num_blocks": 65536, 00:17:20.222 "uuid": "e719220b-02b7-4211-98cb-53d1b2defffb", 00:17:20.222 "assigned_rate_limits": { 00:17:20.222 "rw_ios_per_sec": 0, 00:17:20.222 "rw_mbytes_per_sec": 0, 00:17:20.222 "r_mbytes_per_sec": 0, 00:17:20.222 
"w_mbytes_per_sec": 0 00:17:20.222 }, 00:17:20.222 "claimed": true, 00:17:20.222 "claim_type": "exclusive_write", 00:17:20.222 "zoned": false, 00:17:20.222 "supported_io_types": { 00:17:20.222 "read": true, 00:17:20.222 "write": true, 00:17:20.222 "unmap": true, 00:17:20.222 "write_zeroes": true, 00:17:20.222 "flush": true, 00:17:20.222 "reset": true, 00:17:20.222 "compare": false, 00:17:20.222 "compare_and_write": false, 00:17:20.222 "abort": true, 00:17:20.222 "nvme_admin": false, 00:17:20.222 "nvme_io": false 00:17:20.222 }, 00:17:20.222 "memory_domains": [ 00:17:20.222 { 00:17:20.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.222 "dma_device_type": 2 00:17:20.222 } 00:17:20.222 ], 00:17:20.222 "driver_specific": {} 00:17:20.222 } 00:17:20.222 ] 00:17:20.222 06:09:50 -- common/autotest_common.sh@895 -- # return 0 00:17:20.222 06:09:50 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:20.222 06:09:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:20.222 06:09:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:20.222 06:09:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:20.222 06:09:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:20.222 06:09:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:20.222 06:09:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:20.222 06:09:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:20.222 06:09:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:20.222 06:09:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:20.222 06:09:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.222 06:09:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.481 06:09:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:20.481 "name": "Existed_Raid", 00:17:20.481 "uuid": "76ee748b-89b1-4946-9ef5-8648c881d00b", 00:17:20.481 "strip_size_kb": 64, 00:17:20.481 "state": "configuring", 00:17:20.481 "raid_level": "concat", 00:17:20.481 "superblock": true, 00:17:20.481 "num_base_bdevs": 3, 00:17:20.481 "num_base_bdevs_discovered": 1, 00:17:20.481 "num_base_bdevs_operational": 3, 00:17:20.481 "base_bdevs_list": [ 00:17:20.481 { 00:17:20.481 "name": "BaseBdev1", 00:17:20.481 "uuid": "e719220b-02b7-4211-98cb-53d1b2defffb", 00:17:20.481 "is_configured": true, 00:17:20.481 "data_offset": 2048, 00:17:20.481 "data_size": 63488 00:17:20.481 }, 00:17:20.481 { 00:17:20.481 "name": "BaseBdev2", 00:17:20.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.481 "is_configured": false, 00:17:20.481 "data_offset": 0, 00:17:20.481 "data_size": 0 00:17:20.481 }, 00:17:20.481 { 00:17:20.481 "name": "BaseBdev3", 00:17:20.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.481 "is_configured": false, 00:17:20.481 "data_offset": 0, 00:17:20.481 "data_size": 0 00:17:20.481 } 00:17:20.481 ] 00:17:20.481 }' 00:17:20.481 06:09:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:20.481 06:09:50 -- common/autotest_common.sh@10 -- # set +x 00:17:21.049 06:09:51 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:21.049 [2024-06-11 06:09:51.584061] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:21.049 [2024-06-11 06:09:51.584124] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:21.049 06:09:51 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:21.049 06:09:51 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:21.308 06:09:51 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:21.568 BaseBdev1 00:17:21.568 06:09:52 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:21.568 06:09:52 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:21.568 06:09:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:21.568 06:09:52 -- common/autotest_common.sh@889 -- # local i 00:17:21.568 06:09:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:21.568 06:09:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:21.568 06:09:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:21.827 06:09:52 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:22.086 [ 00:17:22.086 { 00:17:22.086 "name": "BaseBdev1", 00:17:22.086 "aliases": [ 00:17:22.086 "e58ee8fc-7e11-4684-a6d0-11fc69f5138c" 00:17:22.086 ], 00:17:22.086 "product_name": "Malloc disk", 00:17:22.086 "block_size": 512, 00:17:22.086 "num_blocks": 65536, 00:17:22.086 "uuid": "e58ee8fc-7e11-4684-a6d0-11fc69f5138c", 00:17:22.086 "assigned_rate_limits": { 00:17:22.086 "rw_ios_per_sec": 0, 00:17:22.086 "rw_mbytes_per_sec": 0, 00:17:22.086 "r_mbytes_per_sec": 0, 00:17:22.086 "w_mbytes_per_sec": 0 00:17:22.086 }, 00:17:22.086 "claimed": false, 00:17:22.086 "zoned": false, 00:17:22.086 "supported_io_types": { 00:17:22.086 "read": true, 00:17:22.086 "write": true, 00:17:22.086 "unmap": true, 00:17:22.086 "write_zeroes": true, 00:17:22.086 "flush": true, 00:17:22.086 "reset": true, 00:17:22.086 "compare": false, 00:17:22.086 "compare_and_write": false, 00:17:22.086 "abort": true, 00:17:22.086 "nvme_admin": false, 00:17:22.086 "nvme_io": false 00:17:22.086 }, 00:17:22.086 "memory_domains": [ 00:17:22.086 { 00:17:22.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.086 "dma_device_type": 2 00:17:22.086 } 00:17:22.086 ], 00:17:22.086 "driver_specific": {} 00:17:22.086 } 00:17:22.086 ] 00:17:22.086 06:09:52 -- common/autotest_common.sh@895 -- # return 0 00:17:22.086 06:09:52 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:22.086 [2024-06-11 06:09:52.654707] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:22.086 [2024-06-11 06:09:52.656987] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:22.086 [2024-06-11 06:09:52.657042] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:22.086 [2024-06-11 06:09:52.657052] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:22.086 [2024-06-11 06:09:52.657077] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:22.086 06:09:52 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:22.086 06:09:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:22.086 
06:09:52 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:22.086 06:09:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:22.086 06:09:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:22.086 06:09:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:22.086 06:09:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:22.086 06:09:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:22.086 06:09:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:22.086 06:09:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:22.086 06:09:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:22.086 06:09:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:22.086 06:09:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.086 06:09:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.345 06:09:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:22.345 "name": "Existed_Raid", 00:17:22.345 "uuid": "4a2fe6d1-b94d-4fae-9db2-c52a47bf0f41", 00:17:22.345 "strip_size_kb": 64, 00:17:22.345 "state": "configuring", 00:17:22.345 "raid_level": "concat", 00:17:22.345 "superblock": true, 00:17:22.345 "num_base_bdevs": 3, 00:17:22.345 "num_base_bdevs_discovered": 1, 00:17:22.345 "num_base_bdevs_operational": 3, 00:17:22.345 "base_bdevs_list": [ 00:17:22.345 { 00:17:22.345 "name": "BaseBdev1", 00:17:22.345 "uuid": "e58ee8fc-7e11-4684-a6d0-11fc69f5138c", 00:17:22.345 "is_configured": true, 00:17:22.345 "data_offset": 2048, 00:17:22.345 "data_size": 63488 00:17:22.345 }, 00:17:22.345 { 00:17:22.345 "name": "BaseBdev2", 00:17:22.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.345 "is_configured": false, 00:17:22.345 "data_offset": 0, 00:17:22.345 "data_size": 0 00:17:22.345 }, 00:17:22.345 { 00:17:22.345 "name": "BaseBdev3", 00:17:22.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.345 "is_configured": false, 00:17:22.345 "data_offset": 0, 00:17:22.345 "data_size": 0 00:17:22.345 } 00:17:22.345 ] 00:17:22.345 }' 00:17:22.345 06:09:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:22.345 06:09:52 -- common/autotest_common.sh@10 -- # set +x 00:17:22.913 06:09:53 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:23.172 [2024-06-11 06:09:53.750232] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:23.172 BaseBdev2 00:17:23.172 06:09:53 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:23.172 06:09:53 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:23.172 06:09:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:23.172 06:09:53 -- common/autotest_common.sh@889 -- # local i 00:17:23.172 06:09:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:23.172 06:09:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:23.172 06:09:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:23.431 06:09:54 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:23.690 [ 00:17:23.690 { 00:17:23.690 "name": "BaseBdev2", 00:17:23.690 "aliases": [ 00:17:23.690 
"da27dd1d-0add-4931-9f3d-f60625642255" 00:17:23.690 ], 00:17:23.690 "product_name": "Malloc disk", 00:17:23.690 "block_size": 512, 00:17:23.690 "num_blocks": 65536, 00:17:23.690 "uuid": "da27dd1d-0add-4931-9f3d-f60625642255", 00:17:23.690 "assigned_rate_limits": { 00:17:23.690 "rw_ios_per_sec": 0, 00:17:23.690 "rw_mbytes_per_sec": 0, 00:17:23.690 "r_mbytes_per_sec": 0, 00:17:23.690 "w_mbytes_per_sec": 0 00:17:23.690 }, 00:17:23.690 "claimed": true, 00:17:23.690 "claim_type": "exclusive_write", 00:17:23.690 "zoned": false, 00:17:23.690 "supported_io_types": { 00:17:23.690 "read": true, 00:17:23.690 "write": true, 00:17:23.690 "unmap": true, 00:17:23.690 "write_zeroes": true, 00:17:23.690 "flush": true, 00:17:23.690 "reset": true, 00:17:23.690 "compare": false, 00:17:23.690 "compare_and_write": false, 00:17:23.690 "abort": true, 00:17:23.690 "nvme_admin": false, 00:17:23.690 "nvme_io": false 00:17:23.690 }, 00:17:23.690 "memory_domains": [ 00:17:23.690 { 00:17:23.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.690 "dma_device_type": 2 00:17:23.690 } 00:17:23.690 ], 00:17:23.690 "driver_specific": {} 00:17:23.690 } 00:17:23.690 ] 00:17:23.690 06:09:54 -- common/autotest_common.sh@895 -- # return 0 00:17:23.690 06:09:54 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:23.690 06:09:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:23.690 06:09:54 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:23.690 06:09:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:23.690 06:09:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:23.690 06:09:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:23.690 06:09:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:23.690 06:09:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:23.690 06:09:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:23.690 06:09:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:23.690 06:09:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:23.690 06:09:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:23.690 06:09:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.690 06:09:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.949 06:09:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:23.949 "name": "Existed_Raid", 00:17:23.949 "uuid": "4a2fe6d1-b94d-4fae-9db2-c52a47bf0f41", 00:17:23.949 "strip_size_kb": 64, 00:17:23.949 "state": "configuring", 00:17:23.949 "raid_level": "concat", 00:17:23.949 "superblock": true, 00:17:23.949 "num_base_bdevs": 3, 00:17:23.949 "num_base_bdevs_discovered": 2, 00:17:23.949 "num_base_bdevs_operational": 3, 00:17:23.949 "base_bdevs_list": [ 00:17:23.949 { 00:17:23.949 "name": "BaseBdev1", 00:17:23.949 "uuid": "e58ee8fc-7e11-4684-a6d0-11fc69f5138c", 00:17:23.949 "is_configured": true, 00:17:23.949 "data_offset": 2048, 00:17:23.949 "data_size": 63488 00:17:23.949 }, 00:17:23.949 { 00:17:23.949 "name": "BaseBdev2", 00:17:23.949 "uuid": "da27dd1d-0add-4931-9f3d-f60625642255", 00:17:23.949 "is_configured": true, 00:17:23.949 "data_offset": 2048, 00:17:23.949 "data_size": 63488 00:17:23.949 }, 00:17:23.949 { 00:17:23.949 "name": "BaseBdev3", 00:17:23.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.949 "is_configured": false, 00:17:23.949 "data_offset": 0, 00:17:23.949 "data_size": 0 
00:17:23.949 } 00:17:23.949 ] 00:17:23.949 }' 00:17:23.949 06:09:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:23.949 06:09:54 -- common/autotest_common.sh@10 -- # set +x 00:17:24.516 06:09:54 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:24.775 [2024-06-11 06:09:55.287542] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:24.775 [2024-06-11 06:09:55.287795] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:17:24.775 [2024-06-11 06:09:55.287807] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:24.775 [2024-06-11 06:09:55.287973] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:17:24.775 [2024-06-11 06:09:55.288333] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:17:24.775 [2024-06-11 06:09:55.288360] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:17:24.775 [2024-06-11 06:09:55.288539] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.775 BaseBdev3 00:17:24.775 06:09:55 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:24.775 06:09:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:24.775 06:09:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:24.775 06:09:55 -- common/autotest_common.sh@889 -- # local i 00:17:24.775 06:09:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:24.775 06:09:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:24.775 06:09:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:25.034 06:09:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:25.292 [ 00:17:25.292 { 00:17:25.292 "name": "BaseBdev3", 00:17:25.292 "aliases": [ 00:17:25.292 "06920419-54f3-472d-9890-74e39563df3b" 00:17:25.292 ], 00:17:25.292 "product_name": "Malloc disk", 00:17:25.292 "block_size": 512, 00:17:25.292 "num_blocks": 65536, 00:17:25.292 "uuid": "06920419-54f3-472d-9890-74e39563df3b", 00:17:25.292 "assigned_rate_limits": { 00:17:25.292 "rw_ios_per_sec": 0, 00:17:25.292 "rw_mbytes_per_sec": 0, 00:17:25.292 "r_mbytes_per_sec": 0, 00:17:25.292 "w_mbytes_per_sec": 0 00:17:25.292 }, 00:17:25.292 "claimed": true, 00:17:25.293 "claim_type": "exclusive_write", 00:17:25.293 "zoned": false, 00:17:25.293 "supported_io_types": { 00:17:25.293 "read": true, 00:17:25.293 "write": true, 00:17:25.293 "unmap": true, 00:17:25.293 "write_zeroes": true, 00:17:25.293 "flush": true, 00:17:25.293 "reset": true, 00:17:25.293 "compare": false, 00:17:25.293 "compare_and_write": false, 00:17:25.293 "abort": true, 00:17:25.293 "nvme_admin": false, 00:17:25.293 "nvme_io": false 00:17:25.293 }, 00:17:25.293 "memory_domains": [ 00:17:25.293 { 00:17:25.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.293 "dma_device_type": 2 00:17:25.293 } 00:17:25.293 ], 00:17:25.293 "driver_specific": {} 00:17:25.293 } 00:17:25.293 ] 00:17:25.293 06:09:55 -- common/autotest_common.sh@895 -- # return 0 00:17:25.293 06:09:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:25.293 06:09:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:25.293 06:09:55 -- 
bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:25.293 06:09:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:25.293 06:09:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:25.293 06:09:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:25.293 06:09:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:25.293 06:09:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:25.293 06:09:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:25.293 06:09:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:25.293 06:09:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:25.293 06:09:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:25.293 06:09:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.293 06:09:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.551 06:09:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:25.551 "name": "Existed_Raid", 00:17:25.551 "uuid": "4a2fe6d1-b94d-4fae-9db2-c52a47bf0f41", 00:17:25.551 "strip_size_kb": 64, 00:17:25.551 "state": "online", 00:17:25.551 "raid_level": "concat", 00:17:25.551 "superblock": true, 00:17:25.551 "num_base_bdevs": 3, 00:17:25.551 "num_base_bdevs_discovered": 3, 00:17:25.551 "num_base_bdevs_operational": 3, 00:17:25.551 "base_bdevs_list": [ 00:17:25.551 { 00:17:25.551 "name": "BaseBdev1", 00:17:25.551 "uuid": "e58ee8fc-7e11-4684-a6d0-11fc69f5138c", 00:17:25.551 "is_configured": true, 00:17:25.551 "data_offset": 2048, 00:17:25.551 "data_size": 63488 00:17:25.551 }, 00:17:25.551 { 00:17:25.551 "name": "BaseBdev2", 00:17:25.551 "uuid": "da27dd1d-0add-4931-9f3d-f60625642255", 00:17:25.551 "is_configured": true, 00:17:25.551 "data_offset": 2048, 00:17:25.551 "data_size": 63488 00:17:25.551 }, 00:17:25.551 { 00:17:25.551 "name": "BaseBdev3", 00:17:25.551 "uuid": "06920419-54f3-472d-9890-74e39563df3b", 00:17:25.551 "is_configured": true, 00:17:25.551 "data_offset": 2048, 00:17:25.551 "data_size": 63488 00:17:25.551 } 00:17:25.551 ] 00:17:25.551 }' 00:17:25.551 06:09:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:25.551 06:09:55 -- common/autotest_common.sh@10 -- # set +x 00:17:26.119 06:09:56 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:26.119 [2024-06-11 06:09:56.723930] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:26.119 [2024-06-11 06:09:56.723970] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:26.119 [2024-06-11 06:09:56.724039] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:26.377 06:09:56 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:26.377 06:09:56 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:26.377 06:09:56 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:26.377 06:09:56 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:26.377 06:09:56 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:26.377 06:09:56 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:17:26.377 06:09:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:26.377 06:09:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:26.377 06:09:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:26.377 06:09:56 
-- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:26.377 06:09:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:26.377 06:09:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:26.377 06:09:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:26.377 06:09:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:26.377 06:09:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:26.377 06:09:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.377 06:09:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.636 06:09:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:26.636 "name": "Existed_Raid", 00:17:26.636 "uuid": "4a2fe6d1-b94d-4fae-9db2-c52a47bf0f41", 00:17:26.636 "strip_size_kb": 64, 00:17:26.636 "state": "offline", 00:17:26.636 "raid_level": "concat", 00:17:26.636 "superblock": true, 00:17:26.636 "num_base_bdevs": 3, 00:17:26.636 "num_base_bdevs_discovered": 2, 00:17:26.636 "num_base_bdevs_operational": 2, 00:17:26.636 "base_bdevs_list": [ 00:17:26.636 { 00:17:26.636 "name": null, 00:17:26.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.636 "is_configured": false, 00:17:26.636 "data_offset": 2048, 00:17:26.636 "data_size": 63488 00:17:26.636 }, 00:17:26.636 { 00:17:26.636 "name": "BaseBdev2", 00:17:26.636 "uuid": "da27dd1d-0add-4931-9f3d-f60625642255", 00:17:26.636 "is_configured": true, 00:17:26.636 "data_offset": 2048, 00:17:26.636 "data_size": 63488 00:17:26.636 }, 00:17:26.636 { 00:17:26.636 "name": "BaseBdev3", 00:17:26.636 "uuid": "06920419-54f3-472d-9890-74e39563df3b", 00:17:26.636 "is_configured": true, 00:17:26.636 "data_offset": 2048, 00:17:26.636 "data_size": 63488 00:17:26.636 } 00:17:26.636 ] 00:17:26.636 }' 00:17:26.636 06:09:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:26.636 06:09:57 -- common/autotest_common.sh@10 -- # set +x 00:17:27.203 06:09:57 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:27.203 06:09:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:27.203 06:09:57 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.203 06:09:57 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:27.462 06:09:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:27.462 06:09:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:27.462 06:09:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:27.721 [2024-06-11 06:09:58.216042] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:27.721 06:09:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:27.721 06:09:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:27.721 06:09:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.721 06:09:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:27.981 06:09:58 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:27.981 06:09:58 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:27.981 06:09:58 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:28.240 [2024-06-11 06:09:58.733329] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 
00:17:28.240 [2024-06-11 06:09:58.733400] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:17:28.240 06:09:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:28.240 06:09:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:28.240 06:09:58 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.240 06:09:58 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:28.499 06:09:59 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:28.499 06:09:59 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:28.499 06:09:59 -- bdev/bdev_raid.sh@287 -- # killprocess 117019 00:17:28.499 06:09:59 -- common/autotest_common.sh@926 -- # '[' -z 117019 ']' 00:17:28.499 06:09:59 -- common/autotest_common.sh@930 -- # kill -0 117019 00:17:28.499 06:09:59 -- common/autotest_common.sh@931 -- # uname 00:17:28.499 06:09:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:28.499 06:09:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117019 00:17:28.499 killing process with pid 117019 00:17:28.499 06:09:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:28.499 06:09:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:28.499 06:09:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117019' 00:17:28.499 06:09:59 -- common/autotest_common.sh@945 -- # kill 117019 00:17:28.499 06:09:59 -- common/autotest_common.sh@950 -- # wait 117019 00:17:28.499 [2024-06-11 06:09:59.132268] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:28.499 [2024-06-11 06:09:59.132437] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:29.878 ************************************ 00:17:29.878 END TEST raid_state_function_test_sb 00:17:29.878 ************************************ 00:17:29.878 06:10:00 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:29.878 00:17:29.878 real 0m12.877s 00:17:29.878 user 0m21.691s 00:17:29.878 sys 0m2.049s 00:17:29.878 06:10:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:29.878 06:10:00 -- common/autotest_common.sh@10 -- # set +x 00:17:30.137 06:10:00 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:17:30.138 06:10:00 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:17:30.138 06:10:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:30.138 06:10:00 -- common/autotest_common.sh@10 -- # set +x 00:17:30.138 ************************************ 00:17:30.138 START TEST raid_superblock_test 00:17:30.138 ************************************ 00:17:30.138 06:10:00 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 3 00:17:30.138 06:10:00 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:17:30.138 06:10:00 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:17:30.138 06:10:00 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:30.138 06:10:00 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:30.138 06:10:00 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:30.138 06:10:00 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:30.138 06:10:00 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:30.138 06:10:00 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:30.138 06:10:00 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:30.138 06:10:00 -- bdev/bdev_raid.sh@344 -- # local strip_size 
00:17:30.138 06:10:00 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:30.138 06:10:00 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:30.138 06:10:00 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:30.138 06:10:00 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:17:30.138 06:10:00 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:17:30.138 06:10:00 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:17:30.138 06:10:00 -- bdev/bdev_raid.sh@357 -- # raid_pid=117413 00:17:30.138 06:10:00 -- bdev/bdev_raid.sh@358 -- # waitforlisten 117413 /var/tmp/spdk-raid.sock 00:17:30.138 06:10:00 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:30.138 06:10:00 -- common/autotest_common.sh@819 -- # '[' -z 117413 ']' 00:17:30.138 06:10:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:30.138 06:10:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:30.138 06:10:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:30.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:30.138 06:10:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:30.138 06:10:00 -- common/autotest_common.sh@10 -- # set +x 00:17:30.138 [2024-06-11 06:10:00.659711] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:17:30.138 [2024-06-11 06:10:00.660785] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117413 ] 00:17:30.397 [2024-06-11 06:10:00.844906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.656 [2024-06-11 06:10:01.077412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.916 [2024-06-11 06:10:01.315377] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:31.175 06:10:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:31.175 06:10:01 -- common/autotest_common.sh@852 -- # return 0 00:17:31.175 06:10:01 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:31.175 06:10:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:31.175 06:10:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:31.175 06:10:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:31.175 06:10:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:31.175 06:10:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:31.175 06:10:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:31.175 06:10:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:31.175 06:10:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:31.175 malloc1 00:17:31.175 06:10:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:31.461 [2024-06-11 06:10:02.011145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:31.461 [2024-06-11 06:10:02.011445] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:17:31.461 [2024-06-11 06:10:02.011522] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:17:31.461 [2024-06-11 06:10:02.011782] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.461 [2024-06-11 06:10:02.014547] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.461 [2024-06-11 06:10:02.014720] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:31.461 pt1 00:17:31.461 06:10:02 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:31.461 06:10:02 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:31.461 06:10:02 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:31.461 06:10:02 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:31.461 06:10:02 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:31.461 06:10:02 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:31.461 06:10:02 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:31.461 06:10:02 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:31.461 06:10:02 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:31.721 malloc2 00:17:31.721 06:10:02 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:31.979 [2024-06-11 06:10:02.502684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:31.979 [2024-06-11 06:10:02.502969] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.979 [2024-06-11 06:10:02.503061] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:31.979 [2024-06-11 06:10:02.503219] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.979 [2024-06-11 06:10:02.505813] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.979 [2024-06-11 06:10:02.505992] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:31.979 pt2 00:17:31.979 06:10:02 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:31.979 06:10:02 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:31.979 06:10:02 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:31.979 06:10:02 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:31.979 06:10:02 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:31.979 06:10:02 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:31.979 06:10:02 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:31.979 06:10:02 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:31.979 06:10:02 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:32.238 malloc3 00:17:32.239 06:10:02 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:32.498 [2024-06-11 06:10:02.942304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:32.498 [2024-06-11 06:10:02.942541] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:17:32.498 [2024-06-11 06:10:02.942626] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:17:32.498 [2024-06-11 06:10:02.942769] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.498 [2024-06-11 06:10:02.945449] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.498 [2024-06-11 06:10:02.945608] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:32.498 pt3 00:17:32.498 06:10:02 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:32.498 06:10:02 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:32.498 06:10:02 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:32.498 [2024-06-11 06:10:03.106511] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:32.498 [2024-06-11 06:10:03.108974] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:32.498 [2024-06-11 06:10:03.109163] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:32.498 [2024-06-11 06:10:03.109404] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:17:32.498 [2024-06-11 06:10:03.109524] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:32.498 [2024-06-11 06:10:03.109698] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:17:32.498 [2024-06-11 06:10:03.110258] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:17:32.498 [2024-06-11 06:10:03.110357] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:17:32.498 [2024-06-11 06:10:03.110634] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.498 06:10:03 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:17:32.498 06:10:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:32.498 06:10:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:32.498 06:10:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:32.498 06:10:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:32.498 06:10:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:32.498 06:10:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:32.498 06:10:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:32.498 06:10:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:32.498 06:10:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:32.498 06:10:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.498 06:10:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.758 06:10:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:32.758 "name": "raid_bdev1", 00:17:32.758 "uuid": "9af66098-6efc-4428-950a-4a37e70344fd", 00:17:32.758 "strip_size_kb": 64, 00:17:32.758 "state": "online", 00:17:32.758 "raid_level": "concat", 00:17:32.758 "superblock": true, 00:17:32.758 "num_base_bdevs": 3, 00:17:32.758 "num_base_bdevs_discovered": 3, 00:17:32.758 "num_base_bdevs_operational": 3, 00:17:32.758 "base_bdevs_list": [ 00:17:32.758 { 00:17:32.758 "name": "pt1", 00:17:32.758 "uuid": 
"97f70eef-f9da-5337-af25-3becc8b08df8", 00:17:32.758 "is_configured": true, 00:17:32.758 "data_offset": 2048, 00:17:32.758 "data_size": 63488 00:17:32.758 }, 00:17:32.758 { 00:17:32.758 "name": "pt2", 00:17:32.758 "uuid": "12f7ca62-92e7-5558-a2b7-e5ef2a4691ba", 00:17:32.758 "is_configured": true, 00:17:32.758 "data_offset": 2048, 00:17:32.758 "data_size": 63488 00:17:32.758 }, 00:17:32.758 { 00:17:32.758 "name": "pt3", 00:17:32.758 "uuid": "88ae83fc-792b-5645-9b50-92d827bcd828", 00:17:32.758 "is_configured": true, 00:17:32.758 "data_offset": 2048, 00:17:32.758 "data_size": 63488 00:17:32.758 } 00:17:32.758 ] 00:17:32.758 }' 00:17:32.758 06:10:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:32.758 06:10:03 -- common/autotest_common.sh@10 -- # set +x 00:17:33.326 06:10:03 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:33.326 06:10:03 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:33.585 [2024-06-11 06:10:04.106982] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:33.585 06:10:04 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=9af66098-6efc-4428-950a-4a37e70344fd 00:17:33.585 06:10:04 -- bdev/bdev_raid.sh@380 -- # '[' -z 9af66098-6efc-4428-950a-4a37e70344fd ']' 00:17:33.585 06:10:04 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:33.844 [2024-06-11 06:10:04.278811] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:33.844 [2024-06-11 06:10:04.278979] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:33.844 [2024-06-11 06:10:04.279199] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.844 [2024-06-11 06:10:04.279356] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.844 [2024-06-11 06:10:04.279432] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:17:33.844 06:10:04 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.844 06:10:04 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:34.103 06:10:04 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:34.103 06:10:04 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:34.103 06:10:04 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:34.103 06:10:04 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:34.363 06:10:04 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:34.363 06:10:04 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:34.622 06:10:05 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:34.622 06:10:05 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:34.622 06:10:05 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:34.622 06:10:05 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:34.881 06:10:05 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:34.881 06:10:05 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:34.881 06:10:05 -- common/autotest_common.sh@640 -- # local es=0 00:17:34.881 06:10:05 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:34.881 06:10:05 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.881 06:10:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:34.881 06:10:05 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.881 06:10:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:34.881 06:10:05 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.881 06:10:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:34.881 06:10:05 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.881 06:10:05 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:34.881 06:10:05 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:35.140 [2024-06-11 06:10:05.615016] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:35.140 [2024-06-11 06:10:05.617436] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:35.140 [2024-06-11 06:10:05.617634] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:35.140 [2024-06-11 06:10:05.617722] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:35.140 [2024-06-11 06:10:05.617905] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:35.140 [2024-06-11 06:10:05.618060] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:35.140 [2024-06-11 06:10:05.618143] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:35.140 [2024-06-11 06:10:05.618254] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:17:35.140 request: 00:17:35.140 { 00:17:35.140 "name": "raid_bdev1", 00:17:35.140 "raid_level": "concat", 00:17:35.140 "base_bdevs": [ 00:17:35.140 "malloc1", 00:17:35.140 "malloc2", 00:17:35.140 "malloc3" 00:17:35.140 ], 00:17:35.140 "superblock": false, 00:17:35.140 "strip_size_kb": 64, 00:17:35.140 "method": "bdev_raid_create", 00:17:35.140 "req_id": 1 00:17:35.140 } 00:17:35.140 Got JSON-RPC error response 00:17:35.140 response: 00:17:35.140 { 00:17:35.140 "code": -17, 00:17:35.140 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:35.140 } 00:17:35.140 06:10:05 -- common/autotest_common.sh@643 -- # es=1 00:17:35.140 06:10:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:35.141 06:10:05 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:35.141 06:10:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:35.141 06:10:05 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.141 06:10:05 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:35.400 06:10:05 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:35.400 06:10:05 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:35.400 06:10:05 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:35.400 [2024-06-11 06:10:05.967004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:35.400 [2024-06-11 06:10:05.967293] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.400 [2024-06-11 06:10:05.967368] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:35.400 [2024-06-11 06:10:05.967449] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.400 [2024-06-11 06:10:05.970166] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.400 [2024-06-11 06:10:05.970323] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:35.400 [2024-06-11 06:10:05.970548] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:35.400 [2024-06-11 06:10:05.970718] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:35.400 pt1 00:17:35.400 06:10:05 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:17:35.400 06:10:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:35.400 06:10:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:35.400 06:10:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:35.401 06:10:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:35.401 06:10:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:35.401 06:10:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:35.401 06:10:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:35.401 06:10:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:35.401 06:10:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:35.401 06:10:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.401 06:10:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.660 06:10:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:35.660 "name": "raid_bdev1", 00:17:35.660 "uuid": "9af66098-6efc-4428-950a-4a37e70344fd", 00:17:35.660 "strip_size_kb": 64, 00:17:35.660 "state": "configuring", 00:17:35.660 "raid_level": "concat", 00:17:35.660 "superblock": true, 00:17:35.660 "num_base_bdevs": 3, 00:17:35.660 "num_base_bdevs_discovered": 1, 00:17:35.660 "num_base_bdevs_operational": 3, 00:17:35.660 "base_bdevs_list": [ 00:17:35.660 { 00:17:35.660 "name": "pt1", 00:17:35.660 "uuid": "97f70eef-f9da-5337-af25-3becc8b08df8", 00:17:35.660 "is_configured": true, 00:17:35.660 "data_offset": 2048, 00:17:35.660 "data_size": 63488 00:17:35.660 }, 00:17:35.660 { 00:17:35.660 "name": null, 00:17:35.660 "uuid": "12f7ca62-92e7-5558-a2b7-e5ef2a4691ba", 00:17:35.660 "is_configured": false, 00:17:35.660 "data_offset": 2048, 00:17:35.660 "data_size": 63488 00:17:35.660 }, 00:17:35.660 { 00:17:35.660 "name": null, 00:17:35.660 "uuid": "88ae83fc-792b-5645-9b50-92d827bcd828", 00:17:35.660 "is_configured": false, 00:17:35.660 
"data_offset": 2048, 00:17:35.660 "data_size": 63488 00:17:35.660 } 00:17:35.660 ] 00:17:35.660 }' 00:17:35.660 06:10:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:35.660 06:10:06 -- common/autotest_common.sh@10 -- # set +x 00:17:36.228 06:10:06 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:17:36.228 06:10:06 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:36.228 [2024-06-11 06:10:06.795143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:36.228 [2024-06-11 06:10:06.795435] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.228 [2024-06-11 06:10:06.795521] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:36.228 [2024-06-11 06:10:06.795611] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.228 [2024-06-11 06:10:06.796155] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.229 [2024-06-11 06:10:06.796297] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:36.229 [2024-06-11 06:10:06.796515] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:36.229 [2024-06-11 06:10:06.796613] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:36.229 pt2 00:17:36.229 06:10:06 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:36.488 [2024-06-11 06:10:06.967221] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:36.488 06:10:06 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:17:36.488 06:10:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:36.488 06:10:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:36.488 06:10:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:36.488 06:10:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:36.488 06:10:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:36.488 06:10:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:36.488 06:10:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:36.488 06:10:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:36.488 06:10:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:36.488 06:10:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.488 06:10:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.747 06:10:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:36.747 "name": "raid_bdev1", 00:17:36.747 "uuid": "9af66098-6efc-4428-950a-4a37e70344fd", 00:17:36.747 "strip_size_kb": 64, 00:17:36.747 "state": "configuring", 00:17:36.747 "raid_level": "concat", 00:17:36.747 "superblock": true, 00:17:36.747 "num_base_bdevs": 3, 00:17:36.747 "num_base_bdevs_discovered": 1, 00:17:36.747 "num_base_bdevs_operational": 3, 00:17:36.747 "base_bdevs_list": [ 00:17:36.747 { 00:17:36.747 "name": "pt1", 00:17:36.747 "uuid": "97f70eef-f9da-5337-af25-3becc8b08df8", 00:17:36.747 "is_configured": true, 00:17:36.747 "data_offset": 2048, 00:17:36.747 "data_size": 63488 00:17:36.747 }, 00:17:36.747 { 00:17:36.747 "name": null, 00:17:36.747 "uuid": 
"12f7ca62-92e7-5558-a2b7-e5ef2a4691ba", 00:17:36.747 "is_configured": false, 00:17:36.747 "data_offset": 2048, 00:17:36.747 "data_size": 63488 00:17:36.747 }, 00:17:36.747 { 00:17:36.747 "name": null, 00:17:36.747 "uuid": "88ae83fc-792b-5645-9b50-92d827bcd828", 00:17:36.747 "is_configured": false, 00:17:36.747 "data_offset": 2048, 00:17:36.747 "data_size": 63488 00:17:36.747 } 00:17:36.747 ] 00:17:36.747 }' 00:17:36.747 06:10:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:36.747 06:10:07 -- common/autotest_common.sh@10 -- # set +x 00:17:37.315 06:10:07 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:37.315 06:10:07 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:37.315 06:10:07 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:37.315 [2024-06-11 06:10:07.839288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:37.315 [2024-06-11 06:10:07.839588] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.315 [2024-06-11 06:10:07.839663] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:37.315 [2024-06-11 06:10:07.839768] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.315 [2024-06-11 06:10:07.840376] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.315 [2024-06-11 06:10:07.840516] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:37.315 [2024-06-11 06:10:07.840726] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:37.315 [2024-06-11 06:10:07.840842] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:37.315 pt2 00:17:37.315 06:10:07 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:37.315 06:10:07 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:37.316 06:10:07 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:37.575 [2024-06-11 06:10:08.067361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:37.575 [2024-06-11 06:10:08.067604] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.575 [2024-06-11 06:10:08.067689] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:37.575 [2024-06-11 06:10:08.067816] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.575 [2024-06-11 06:10:08.068338] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.575 [2024-06-11 06:10:08.068481] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:37.575 [2024-06-11 06:10:08.068697] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:37.575 [2024-06-11 06:10:08.068822] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:37.575 [2024-06-11 06:10:08.068996] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:17:37.575 [2024-06-11 06:10:08.069139] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:37.575 [2024-06-11 06:10:08.069290] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:17:37.575 [2024-06-11 06:10:08.069785] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:17:37.575 [2024-06-11 06:10:08.069889] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:17:37.575 [2024-06-11 06:10:08.070099] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.575 pt3 00:17:37.575 06:10:08 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:37.575 06:10:08 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:37.575 06:10:08 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:17:37.575 06:10:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:37.575 06:10:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:37.575 06:10:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:37.575 06:10:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:37.575 06:10:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:37.575 06:10:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:37.575 06:10:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:37.575 06:10:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:37.575 06:10:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:37.575 06:10:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.575 06:10:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.834 06:10:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:37.834 "name": "raid_bdev1", 00:17:37.834 "uuid": "9af66098-6efc-4428-950a-4a37e70344fd", 00:17:37.834 "strip_size_kb": 64, 00:17:37.834 "state": "online", 00:17:37.834 "raid_level": "concat", 00:17:37.834 "superblock": true, 00:17:37.834 "num_base_bdevs": 3, 00:17:37.834 "num_base_bdevs_discovered": 3, 00:17:37.834 "num_base_bdevs_operational": 3, 00:17:37.834 "base_bdevs_list": [ 00:17:37.834 { 00:17:37.834 "name": "pt1", 00:17:37.834 "uuid": "97f70eef-f9da-5337-af25-3becc8b08df8", 00:17:37.834 "is_configured": true, 00:17:37.834 "data_offset": 2048, 00:17:37.834 "data_size": 63488 00:17:37.834 }, 00:17:37.834 { 00:17:37.834 "name": "pt2", 00:17:37.834 "uuid": "12f7ca62-92e7-5558-a2b7-e5ef2a4691ba", 00:17:37.834 "is_configured": true, 00:17:37.834 "data_offset": 2048, 00:17:37.834 "data_size": 63488 00:17:37.834 }, 00:17:37.834 { 00:17:37.834 "name": "pt3", 00:17:37.834 "uuid": "88ae83fc-792b-5645-9b50-92d827bcd828", 00:17:37.834 "is_configured": true, 00:17:37.834 "data_offset": 2048, 00:17:37.834 "data_size": 63488 00:17:37.834 } 00:17:37.834 ] 00:17:37.834 }' 00:17:37.834 06:10:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:37.834 06:10:08 -- common/autotest_common.sh@10 -- # set +x 00:17:38.093 06:10:08 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:38.093 06:10:08 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:38.352 [2024-06-11 06:10:08.951684] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:38.352 06:10:08 -- bdev/bdev_raid.sh@430 -- # '[' 9af66098-6efc-4428-950a-4a37e70344fd '!=' 9af66098-6efc-4428-950a-4a37e70344fd ']' 00:17:38.352 06:10:08 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:17:38.352 06:10:08 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:38.352 
06:10:08 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:38.352 06:10:08 -- bdev/bdev_raid.sh@511 -- # killprocess 117413 00:17:38.352 06:10:08 -- common/autotest_common.sh@926 -- # '[' -z 117413 ']' 00:17:38.352 06:10:08 -- common/autotest_common.sh@930 -- # kill -0 117413 00:17:38.352 06:10:08 -- common/autotest_common.sh@931 -- # uname 00:17:38.352 06:10:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:38.352 06:10:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117413 00:17:38.612 killing process with pid 117413 00:17:38.612 06:10:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:38.612 06:10:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:38.612 06:10:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117413' 00:17:38.612 06:10:08 -- common/autotest_common.sh@945 -- # kill 117413 00:17:38.612 [2024-06-11 06:10:09.003240] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:38.612 06:10:08 -- common/autotest_common.sh@950 -- # wait 117413 00:17:38.612 [2024-06-11 06:10:09.003329] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:38.612 [2024-06-11 06:10:09.003390] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:38.612 [2024-06-11 06:10:09.003398] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:17:38.872 [2024-06-11 06:10:09.308703] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:40.252 06:10:10 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:40.252 00:17:40.252 real 0m10.095s 00:17:40.252 user 0m16.478s 00:17:40.252 sys 0m1.687s 00:17:40.252 06:10:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:40.252 ************************************ 00:17:40.252 END TEST raid_superblock_test 00:17:40.252 ************************************ 00:17:40.252 06:10:10 -- common/autotest_common.sh@10 -- # set +x 00:17:40.252 06:10:10 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:40.252 06:10:10 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:17:40.252 06:10:10 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:40.252 06:10:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:40.252 06:10:10 -- common/autotest_common.sh@10 -- # set +x 00:17:40.252 ************************************ 00:17:40.252 START TEST raid_state_function_test 00:17:40.252 ************************************ 00:17:40.252 06:10:10 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 false 00:17:40.252 06:10:10 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:40.252 06:10:10 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:40.252 06:10:10 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:40.252 06:10:10 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:40.252 06:10:10 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:40.252 06:10:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:40.252 06:10:10 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:40.252 06:10:10 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:40.252 06:10:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:40.252 06:10:10 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:40.252 06:10:10 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:40.252 06:10:10 -- bdev/bdev_raid.sh@206 -- # (( i <= 
num_base_bdevs )) 00:17:40.252 06:10:10 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:40.252 06:10:10 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:40.252 06:10:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:40.252 06:10:10 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:40.252 06:10:10 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:40.252 06:10:10 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:40.253 06:10:10 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:40.253 06:10:10 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:40.253 06:10:10 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:40.253 06:10:10 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:40.253 06:10:10 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:40.253 06:10:10 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:40.253 06:10:10 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:40.253 06:10:10 -- bdev/bdev_raid.sh@226 -- # raid_pid=117712 00:17:40.253 06:10:10 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 117712' 00:17:40.253 Process raid pid: 117712 00:17:40.253 06:10:10 -- bdev/bdev_raid.sh@228 -- # waitforlisten 117712 /var/tmp/spdk-raid.sock 00:17:40.253 06:10:10 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:40.253 06:10:10 -- common/autotest_common.sh@819 -- # '[' -z 117712 ']' 00:17:40.253 06:10:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:40.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:40.253 06:10:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:40.253 06:10:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:40.253 06:10:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:40.253 06:10:10 -- common/autotest_common.sh@10 -- # set +x 00:17:40.253 [2024-06-11 06:10:10.843174] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
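The trace above is the standard harness startup for these raid tests: a bare bdev_svc app is launched to host the bdevs under test, and the script blocks until the app's private RPC socket answers. A minimal sketch of that handshake, assuming the repo layout shown in the trace and the waitforlisten helper from common/autotest_common.sh:

    # Host only the bdev layer, with raid debug logging (-L bdev_raid),
    # on a private RPC socket so other SPDK apps on the host are untouched.
    rpc_sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$rpc_sock" -i 0 -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" "$rpc_sock"   # poll until the socket accepts RPCs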
00:17:40.253 [2024-06-11 06:10:10.844286] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.512 [2024-06-11 06:10:11.028853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.771 [2024-06-11 06:10:11.266168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.030 [2024-06-11 06:10:11.514716] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.290 06:10:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:41.290 06:10:11 -- common/autotest_common.sh@852 -- # return 0 00:17:41.290 06:10:11 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:41.549 [2024-06-11 06:10:11.987914] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:41.549 [2024-06-11 06:10:11.988647] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:41.549 [2024-06-11 06:10:11.988783] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:41.549 [2024-06-11 06:10:11.988968] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:41.549 [2024-06-11 06:10:11.989152] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:41.549 [2024-06-11 06:10:11.989346] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:41.549 06:10:11 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:41.549 06:10:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:41.549 06:10:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:41.549 06:10:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:41.549 06:10:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:41.549 06:10:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:41.549 06:10:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:41.549 06:10:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:41.549 06:10:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:41.549 06:10:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:41.549 06:10:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.549 06:10:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.808 06:10:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:41.808 "name": "Existed_Raid", 00:17:41.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.808 "strip_size_kb": 0, 00:17:41.808 "state": "configuring", 00:17:41.808 "raid_level": "raid1", 00:17:41.808 "superblock": false, 00:17:41.808 "num_base_bdevs": 3, 00:17:41.808 "num_base_bdevs_discovered": 0, 00:17:41.808 "num_base_bdevs_operational": 3, 00:17:41.808 "base_bdevs_list": [ 00:17:41.808 { 00:17:41.808 "name": "BaseBdev1", 00:17:41.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.808 "is_configured": false, 00:17:41.808 "data_offset": 0, 00:17:41.808 "data_size": 0 00:17:41.808 }, 00:17:41.808 { 00:17:41.808 "name": "BaseBdev2", 00:17:41.808 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:41.808 "is_configured": false, 00:17:41.808 "data_offset": 0, 00:17:41.808 "data_size": 0 00:17:41.808 }, 00:17:41.808 { 00:17:41.808 "name": "BaseBdev3", 00:17:41.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.808 "is_configured": false, 00:17:41.808 "data_offset": 0, 00:17:41.808 "data_size": 0 00:17:41.808 } 00:17:41.808 ] 00:17:41.808 }' 00:17:41.808 06:10:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:41.808 06:10:12 -- common/autotest_common.sh@10 -- # set +x 00:17:42.376 06:10:12 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:42.376 [2024-06-11 06:10:12.979954] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:42.376 [2024-06-11 06:10:12.980178] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:17:42.376 06:10:12 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:42.636 [2024-06-11 06:10:13.216051] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:42.636 [2024-06-11 06:10:13.216715] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:42.636 [2024-06-11 06:10:13.216872] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:42.636 [2024-06-11 06:10:13.217047] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:42.636 [2024-06-11 06:10:13.217160] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:42.636 [2024-06-11 06:10:13.217314] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:42.636 06:10:13 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:42.895 [2024-06-11 06:10:13.432769] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:42.895 BaseBdev1 00:17:42.895 06:10:13 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:42.895 06:10:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:42.895 06:10:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:42.895 06:10:13 -- common/autotest_common.sh@889 -- # local i 00:17:42.895 06:10:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:42.895 06:10:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:42.895 06:10:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:43.154 06:10:13 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:43.413 [ 00:17:43.413 { 00:17:43.413 "name": "BaseBdev1", 00:17:43.413 "aliases": [ 00:17:43.413 "151b6016-4997-46b8-8524-201e6c5cf757" 00:17:43.413 ], 00:17:43.413 "product_name": "Malloc disk", 00:17:43.413 "block_size": 512, 00:17:43.413 "num_blocks": 65536, 00:17:43.413 "uuid": "151b6016-4997-46b8-8524-201e6c5cf757", 00:17:43.413 "assigned_rate_limits": { 00:17:43.413 "rw_ios_per_sec": 0, 00:17:43.413 "rw_mbytes_per_sec": 0, 00:17:43.413 "r_mbytes_per_sec": 0, 00:17:43.413 "w_mbytes_per_sec": 0 
00:17:43.413 }, 00:17:43.413 "claimed": true, 00:17:43.413 "claim_type": "exclusive_write", 00:17:43.413 "zoned": false, 00:17:43.413 "supported_io_types": { 00:17:43.413 "read": true, 00:17:43.413 "write": true, 00:17:43.413 "unmap": true, 00:17:43.413 "write_zeroes": true, 00:17:43.413 "flush": true, 00:17:43.413 "reset": true, 00:17:43.413 "compare": false, 00:17:43.413 "compare_and_write": false, 00:17:43.413 "abort": true, 00:17:43.413 "nvme_admin": false, 00:17:43.413 "nvme_io": false 00:17:43.413 }, 00:17:43.413 "memory_domains": [ 00:17:43.413 { 00:17:43.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.413 "dma_device_type": 2 00:17:43.413 } 00:17:43.413 ], 00:17:43.413 "driver_specific": {} 00:17:43.413 } 00:17:43.413 ] 00:17:43.413 06:10:13 -- common/autotest_common.sh@895 -- # return 0 00:17:43.413 06:10:13 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:43.413 06:10:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:43.413 06:10:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:43.413 06:10:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:43.413 06:10:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:43.413 06:10:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:43.413 06:10:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:43.413 06:10:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:43.413 06:10:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:43.413 06:10:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:43.413 06:10:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.413 06:10:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.673 06:10:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:43.673 "name": "Existed_Raid", 00:17:43.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.673 "strip_size_kb": 0, 00:17:43.673 "state": "configuring", 00:17:43.673 "raid_level": "raid1", 00:17:43.673 "superblock": false, 00:17:43.673 "num_base_bdevs": 3, 00:17:43.673 "num_base_bdevs_discovered": 1, 00:17:43.673 "num_base_bdevs_operational": 3, 00:17:43.673 "base_bdevs_list": [ 00:17:43.673 { 00:17:43.673 "name": "BaseBdev1", 00:17:43.673 "uuid": "151b6016-4997-46b8-8524-201e6c5cf757", 00:17:43.673 "is_configured": true, 00:17:43.673 "data_offset": 0, 00:17:43.673 "data_size": 65536 00:17:43.673 }, 00:17:43.673 { 00:17:43.673 "name": "BaseBdev2", 00:17:43.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.673 "is_configured": false, 00:17:43.673 "data_offset": 0, 00:17:43.673 "data_size": 0 00:17:43.673 }, 00:17:43.673 { 00:17:43.673 "name": "BaseBdev3", 00:17:43.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.673 "is_configured": false, 00:17:43.673 "data_offset": 0, 00:17:43.673 "data_size": 0 00:17:43.673 } 00:17:43.673 ] 00:17:43.673 }' 00:17:43.673 06:10:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:43.673 06:10:14 -- common/autotest_common.sh@10 -- # set +x 00:17:44.241 06:10:14 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:44.241 [2024-06-11 06:10:14.845073] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:44.241 [2024-06-11 06:10:14.845334] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 
name Existed_Raid, state configuring 00:17:44.241 06:10:14 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:44.241 06:10:14 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:44.499 [2024-06-11 06:10:15.081198] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:44.499 [2024-06-11 06:10:15.083643] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:44.499 [2024-06-11 06:10:15.084577] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:44.499 [2024-06-11 06:10:15.084708] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:44.499 [2024-06-11 06:10:15.084866] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:44.499 06:10:15 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:44.499 06:10:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:44.499 06:10:15 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:44.499 06:10:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:44.499 06:10:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:44.499 06:10:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:44.499 06:10:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:44.499 06:10:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:44.499 06:10:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:44.499 06:10:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:44.499 06:10:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:44.499 06:10:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:44.499 06:10:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.499 06:10:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.758 06:10:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:44.758 "name": "Existed_Raid", 00:17:44.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.758 "strip_size_kb": 0, 00:17:44.758 "state": "configuring", 00:17:44.758 "raid_level": "raid1", 00:17:44.758 "superblock": false, 00:17:44.758 "num_base_bdevs": 3, 00:17:44.758 "num_base_bdevs_discovered": 1, 00:17:44.758 "num_base_bdevs_operational": 3, 00:17:44.758 "base_bdevs_list": [ 00:17:44.758 { 00:17:44.758 "name": "BaseBdev1", 00:17:44.758 "uuid": "151b6016-4997-46b8-8524-201e6c5cf757", 00:17:44.758 "is_configured": true, 00:17:44.758 "data_offset": 0, 00:17:44.758 "data_size": 65536 00:17:44.758 }, 00:17:44.758 { 00:17:44.758 "name": "BaseBdev2", 00:17:44.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.758 "is_configured": false, 00:17:44.758 "data_offset": 0, 00:17:44.758 "data_size": 0 00:17:44.758 }, 00:17:44.758 { 00:17:44.758 "name": "BaseBdev3", 00:17:44.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.758 "is_configured": false, 00:17:44.758 "data_offset": 0, 00:17:44.758 "data_size": 0 00:17:44.758 } 00:17:44.758 ] 00:17:44.758 }' 00:17:44.758 06:10:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:44.758 06:10:15 -- common/autotest_common.sh@10 -- # set +x 00:17:45.326 06:10:15 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:45.585 [2024-06-11 06:10:16.108258] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:45.585 BaseBdev2 00:17:45.585 06:10:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:45.585 06:10:16 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:45.585 06:10:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:45.585 06:10:16 -- common/autotest_common.sh@889 -- # local i 00:17:45.585 06:10:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:45.585 06:10:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:45.585 06:10:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:45.845 06:10:16 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:46.103 [ 00:17:46.103 { 00:17:46.103 "name": "BaseBdev2", 00:17:46.103 "aliases": [ 00:17:46.103 "295783a6-5a73-4bf2-ad31-dbe8689893b8" 00:17:46.103 ], 00:17:46.103 "product_name": "Malloc disk", 00:17:46.103 "block_size": 512, 00:17:46.103 "num_blocks": 65536, 00:17:46.103 "uuid": "295783a6-5a73-4bf2-ad31-dbe8689893b8", 00:17:46.103 "assigned_rate_limits": { 00:17:46.103 "rw_ios_per_sec": 0, 00:17:46.104 "rw_mbytes_per_sec": 0, 00:17:46.104 "r_mbytes_per_sec": 0, 00:17:46.104 "w_mbytes_per_sec": 0 00:17:46.104 }, 00:17:46.104 "claimed": true, 00:17:46.104 "claim_type": "exclusive_write", 00:17:46.104 "zoned": false, 00:17:46.104 "supported_io_types": { 00:17:46.104 "read": true, 00:17:46.104 "write": true, 00:17:46.104 "unmap": true, 00:17:46.104 "write_zeroes": true, 00:17:46.104 "flush": true, 00:17:46.104 "reset": true, 00:17:46.104 "compare": false, 00:17:46.104 "compare_and_write": false, 00:17:46.104 "abort": true, 00:17:46.104 "nvme_admin": false, 00:17:46.104 "nvme_io": false 00:17:46.104 }, 00:17:46.104 "memory_domains": [ 00:17:46.104 { 00:17:46.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.104 "dma_device_type": 2 00:17:46.104 } 00:17:46.104 ], 00:17:46.104 "driver_specific": {} 00:17:46.104 } 00:17:46.104 ] 00:17:46.104 06:10:16 -- common/autotest_common.sh@895 -- # return 0 00:17:46.104 06:10:16 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:46.104 06:10:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:46.104 06:10:16 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:46.104 06:10:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:46.104 06:10:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:46.104 06:10:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:46.104 06:10:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:46.104 06:10:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:46.104 06:10:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:46.104 06:10:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:46.104 06:10:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:46.104 06:10:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:46.104 06:10:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.104 06:10:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.104 06:10:16 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:17:46.104 "name": "Existed_Raid", 00:17:46.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.104 "strip_size_kb": 0, 00:17:46.104 "state": "configuring", 00:17:46.104 "raid_level": "raid1", 00:17:46.104 "superblock": false, 00:17:46.104 "num_base_bdevs": 3, 00:17:46.104 "num_base_bdevs_discovered": 2, 00:17:46.104 "num_base_bdevs_operational": 3, 00:17:46.104 "base_bdevs_list": [ 00:17:46.104 { 00:17:46.104 "name": "BaseBdev1", 00:17:46.104 "uuid": "151b6016-4997-46b8-8524-201e6c5cf757", 00:17:46.104 "is_configured": true, 00:17:46.104 "data_offset": 0, 00:17:46.104 "data_size": 65536 00:17:46.104 }, 00:17:46.104 { 00:17:46.104 "name": "BaseBdev2", 00:17:46.104 "uuid": "295783a6-5a73-4bf2-ad31-dbe8689893b8", 00:17:46.104 "is_configured": true, 00:17:46.104 "data_offset": 0, 00:17:46.104 "data_size": 65536 00:17:46.104 }, 00:17:46.104 { 00:17:46.104 "name": "BaseBdev3", 00:17:46.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.104 "is_configured": false, 00:17:46.104 "data_offset": 0, 00:17:46.104 "data_size": 0 00:17:46.104 } 00:17:46.104 ] 00:17:46.104 }' 00:17:46.104 06:10:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:46.104 06:10:16 -- common/autotest_common.sh@10 -- # set +x 00:17:47.041 06:10:17 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:47.041 [2024-06-11 06:10:17.596435] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:47.041 [2024-06-11 06:10:17.596727] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:17:47.041 [2024-06-11 06:10:17.596769] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:47.041 [2024-06-11 06:10:17.597002] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:17:47.041 [2024-06-11 06:10:17.597516] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:17:47.041 [2024-06-11 06:10:17.597635] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:17:47.041 [2024-06-11 06:10:17.597983] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.041 BaseBdev3 00:17:47.041 06:10:17 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:47.041 06:10:17 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:47.041 06:10:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:47.041 06:10:17 -- common/autotest_common.sh@889 -- # local i 00:17:47.041 06:10:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:47.041 06:10:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:47.042 06:10:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:47.300 06:10:17 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:47.561 [ 00:17:47.561 { 00:17:47.561 "name": "BaseBdev3", 00:17:47.561 "aliases": [ 00:17:47.561 "372dc9d7-6a56-4d7b-88d3-49f828e039e6" 00:17:47.561 ], 00:17:47.561 "product_name": "Malloc disk", 00:17:47.561 "block_size": 512, 00:17:47.561 "num_blocks": 65536, 00:17:47.561 "uuid": "372dc9d7-6a56-4d7b-88d3-49f828e039e6", 00:17:47.561 "assigned_rate_limits": { 00:17:47.561 "rw_ios_per_sec": 0, 00:17:47.561 "rw_mbytes_per_sec": 0, 
00:17:47.561 "r_mbytes_per_sec": 0, 00:17:47.561 "w_mbytes_per_sec": 0 00:17:47.561 }, 00:17:47.561 "claimed": true, 00:17:47.561 "claim_type": "exclusive_write", 00:17:47.561 "zoned": false, 00:17:47.561 "supported_io_types": { 00:17:47.561 "read": true, 00:17:47.561 "write": true, 00:17:47.561 "unmap": true, 00:17:47.561 "write_zeroes": true, 00:17:47.561 "flush": true, 00:17:47.561 "reset": true, 00:17:47.561 "compare": false, 00:17:47.561 "compare_and_write": false, 00:17:47.561 "abort": true, 00:17:47.561 "nvme_admin": false, 00:17:47.561 "nvme_io": false 00:17:47.561 }, 00:17:47.561 "memory_domains": [ 00:17:47.561 { 00:17:47.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.561 "dma_device_type": 2 00:17:47.561 } 00:17:47.561 ], 00:17:47.561 "driver_specific": {} 00:17:47.561 } 00:17:47.561 ] 00:17:47.561 06:10:17 -- common/autotest_common.sh@895 -- # return 0 00:17:47.561 06:10:17 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:47.561 06:10:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:47.561 06:10:17 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:47.561 06:10:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:47.561 06:10:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:47.561 06:10:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:47.561 06:10:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:47.561 06:10:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:47.561 06:10:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:47.561 06:10:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:47.561 06:10:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:47.561 06:10:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:47.561 06:10:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.561 06:10:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.839 06:10:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:47.839 "name": "Existed_Raid", 00:17:47.839 "uuid": "53929056-a0a4-465f-bc4d-b56854001591", 00:17:47.839 "strip_size_kb": 0, 00:17:47.839 "state": "online", 00:17:47.839 "raid_level": "raid1", 00:17:47.839 "superblock": false, 00:17:47.839 "num_base_bdevs": 3, 00:17:47.839 "num_base_bdevs_discovered": 3, 00:17:47.839 "num_base_bdevs_operational": 3, 00:17:47.839 "base_bdevs_list": [ 00:17:47.839 { 00:17:47.839 "name": "BaseBdev1", 00:17:47.839 "uuid": "151b6016-4997-46b8-8524-201e6c5cf757", 00:17:47.839 "is_configured": true, 00:17:47.839 "data_offset": 0, 00:17:47.839 "data_size": 65536 00:17:47.839 }, 00:17:47.839 { 00:17:47.839 "name": "BaseBdev2", 00:17:47.839 "uuid": "295783a6-5a73-4bf2-ad31-dbe8689893b8", 00:17:47.839 "is_configured": true, 00:17:47.839 "data_offset": 0, 00:17:47.839 "data_size": 65536 00:17:47.839 }, 00:17:47.839 { 00:17:47.839 "name": "BaseBdev3", 00:17:47.839 "uuid": "372dc9d7-6a56-4d7b-88d3-49f828e039e6", 00:17:47.839 "is_configured": true, 00:17:47.839 "data_offset": 0, 00:17:47.839 "data_size": 65536 00:17:47.839 } 00:17:47.839 ] 00:17:47.839 }' 00:17:47.839 06:10:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:47.839 06:10:18 -- common/autotest_common.sh@10 -- # set +x 00:17:48.419 06:10:18 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:48.419 [2024-06-11 
06:10:18.956797] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:48.678 06:10:19 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:48.678 06:10:19 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:17:48.678 06:10:19 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:48.678 06:10:19 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:48.678 06:10:19 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:17:48.678 06:10:19 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:48.678 06:10:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:48.678 06:10:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:48.678 06:10:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:48.678 06:10:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:48.678 06:10:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:48.678 06:10:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:48.678 06:10:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:48.678 06:10:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:48.678 06:10:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:48.678 06:10:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.678 06:10:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.937 06:10:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:48.937 "name": "Existed_Raid", 00:17:48.937 "uuid": "53929056-a0a4-465f-bc4d-b56854001591", 00:17:48.937 "strip_size_kb": 0, 00:17:48.937 "state": "online", 00:17:48.937 "raid_level": "raid1", 00:17:48.937 "superblock": false, 00:17:48.937 "num_base_bdevs": 3, 00:17:48.937 "num_base_bdevs_discovered": 2, 00:17:48.937 "num_base_bdevs_operational": 2, 00:17:48.937 "base_bdevs_list": [ 00:17:48.937 { 00:17:48.937 "name": null, 00:17:48.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.937 "is_configured": false, 00:17:48.937 "data_offset": 0, 00:17:48.937 "data_size": 65536 00:17:48.937 }, 00:17:48.937 { 00:17:48.937 "name": "BaseBdev2", 00:17:48.937 "uuid": "295783a6-5a73-4bf2-ad31-dbe8689893b8", 00:17:48.937 "is_configured": true, 00:17:48.937 "data_offset": 0, 00:17:48.937 "data_size": 65536 00:17:48.937 }, 00:17:48.937 { 00:17:48.937 "name": "BaseBdev3", 00:17:48.937 "uuid": "372dc9d7-6a56-4d7b-88d3-49f828e039e6", 00:17:48.937 "is_configured": true, 00:17:48.937 "data_offset": 0, 00:17:48.937 "data_size": 65536 00:17:48.937 } 00:17:48.937 ] 00:17:48.937 }' 00:17:48.937 06:10:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:48.937 06:10:19 -- common/autotest_common.sh@10 -- # set +x 00:17:49.505 06:10:19 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:49.505 06:10:19 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:49.505 06:10:19 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.505 06:10:19 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:49.763 06:10:20 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:49.763 06:10:20 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:49.763 06:10:20 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:50.022 [2024-06-11 06:10:20.417255] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
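Because raid1 carries redundancy (has_redundancy returned 0 above), removing a single base bdev must leave the array online rather than failing it. A condensed sketch of the check that produced the JSON dump above, using the same rpc.py/jq pattern as the trace (the info variable is hypothetical shorthand):

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    $rpc bdev_malloc_delete BaseBdev1    # drop one mirror leg
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r .state <<< "$info") == online ]]                 # array still serves I/O
    [[ $(jq -r .num_base_bdevs_discovered <<< "$info") == 2 ]]  # one of three legs gone

The loop then repeats for BaseBdev2 and BaseBdev3; only once every leg is gone does the raid bdev transition offline, as the deconfigure messages that follow show.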
00:17:50.022 06:10:20 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:50.022 06:10:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:50.022 06:10:20 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.022 06:10:20 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:50.280 06:10:20 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:50.280 06:10:20 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:50.280 06:10:20 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:50.538 [2024-06-11 06:10:20.962349] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:50.538 [2024-06-11 06:10:20.962598] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:50.538 [2024-06-11 06:10:20.962765] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.538 [2024-06-11 06:10:21.065134] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:50.538 [2024-06-11 06:10:21.065382] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:17:50.538 06:10:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:50.538 06:10:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:50.538 06:10:21 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.538 06:10:21 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:50.797 06:10:21 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:50.797 06:10:21 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:50.797 06:10:21 -- bdev/bdev_raid.sh@287 -- # killprocess 117712 00:17:50.797 06:10:21 -- common/autotest_common.sh@926 -- # '[' -z 117712 ']' 00:17:50.797 06:10:21 -- common/autotest_common.sh@930 -- # kill -0 117712 00:17:50.797 06:10:21 -- common/autotest_common.sh@931 -- # uname 00:17:50.797 06:10:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:50.797 06:10:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117712 00:17:50.797 06:10:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:50.797 06:10:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:50.797 06:10:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117712' 00:17:50.797 killing process with pid 117712 00:17:50.797 06:10:21 -- common/autotest_common.sh@945 -- # kill 117712 00:17:50.797 [2024-06-11 06:10:21.362167] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:50.797 06:10:21 -- common/autotest_common.sh@950 -- # wait 117712 00:17:50.797 [2024-06-11 06:10:21.362420] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:52.173 06:10:22 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:52.173 00:17:52.173 real 0m11.976s 00:17:52.173 user 0m19.790s 00:17:52.173 sys 0m2.205s 00:17:52.173 ************************************ 00:17:52.173 END TEST raid_state_function_test 00:17:52.173 ************************************ 00:17:52.173 06:10:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:52.173 06:10:22 -- common/autotest_common.sh@10 -- # set +x 00:17:52.173 06:10:22 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 
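The _sb variant that starts here reruns the same state-machine test with superblock=true; the only difference on the wire is the -s flag to bdev_raid_create. With a superblock, raid metadata is reserved at the head of each base bdev, which is why the dumps in this test report "data_offset": 2048 and "data_size": 63488 for 65536-block malloc bdevs, where the run above reported 0 and 65536. Side by side (reusing the rpc alias from the sketch above):

    # superblock=false (test that just ended):
    $rpc bdev_raid_create    -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # superblock=true (this test): -s persists raid metadata on each base bdev
    $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid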
00:17:52.173 06:10:22 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:52.173 06:10:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:52.173 06:10:22 -- common/autotest_common.sh@10 -- # set +x 00:17:52.173 ************************************ 00:17:52.173 START TEST raid_state_function_test_sb 00:17:52.173 ************************************ 00:17:52.173 06:10:22 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 true 00:17:52.173 06:10:22 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:52.173 06:10:22 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:52.174 06:10:22 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:52.174 06:10:22 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:52.174 06:10:22 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:52.174 06:10:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:52.174 06:10:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:52.174 06:10:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:52.174 06:10:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:52.174 06:10:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:52.174 06:10:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:52.174 06:10:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:52.174 06:10:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:52.174 06:10:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:52.174 06:10:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:52.174 06:10:22 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:52.174 06:10:22 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:52.174 06:10:22 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:52.174 06:10:22 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:52.174 06:10:22 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:52.174 06:10:22 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:52.174 06:10:22 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:52.174 06:10:22 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:52.174 06:10:22 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:52.174 06:10:22 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:52.174 06:10:22 -- bdev/bdev_raid.sh@226 -- # raid_pid=118094 00:17:52.174 06:10:22 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 118094' 00:17:52.174 06:10:22 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:52.174 Process raid pid: 118094 00:17:52.174 06:10:22 -- bdev/bdev_raid.sh@228 -- # waitforlisten 118094 /var/tmp/spdk-raid.sock 00:17:52.174 06:10:22 -- common/autotest_common.sh@819 -- # '[' -z 118094 ']' 00:17:52.174 06:10:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:52.174 06:10:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:52.174 06:10:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:52.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:52.174 06:10:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:52.174 06:10:22 -- common/autotest_common.sh@10 -- # set +x 00:17:52.433 [2024-06-11 06:10:22.884321] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:17:52.433 [2024-06-11 06:10:22.884785] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.433 [2024-06-11 06:10:23.073558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.000 [2024-06-11 06:10:23.371525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.000 [2024-06-11 06:10:23.619962] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:53.259 06:10:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:53.259 06:10:23 -- common/autotest_common.sh@852 -- # return 0 00:17:53.259 06:10:23 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:53.518 [2024-06-11 06:10:23.975421] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:53.518 [2024-06-11 06:10:23.975662] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:53.518 [2024-06-11 06:10:23.975764] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:53.518 [2024-06-11 06:10:23.975818] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:53.518 [2024-06-11 06:10:23.975845] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:53.518 [2024-06-11 06:10:23.975912] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:53.518 06:10:23 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:53.518 06:10:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:53.518 06:10:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:53.518 06:10:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:53.518 06:10:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:53.518 06:10:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:53.518 06:10:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:53.518 06:10:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:53.518 06:10:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:53.518 06:10:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:53.518 06:10:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.518 06:10:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.777 06:10:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:53.777 "name": "Existed_Raid", 00:17:53.777 "uuid": "ae1f9a48-6c9b-4708-9643-be573c45c590", 00:17:53.777 "strip_size_kb": 0, 00:17:53.777 "state": "configuring", 00:17:53.777 "raid_level": "raid1", 00:17:53.777 "superblock": true, 00:17:53.777 "num_base_bdevs": 3, 00:17:53.777 "num_base_bdevs_discovered": 0, 00:17:53.777 "num_base_bdevs_operational": 3, 00:17:53.777 "base_bdevs_list": [ 00:17:53.777 { 00:17:53.777 "name": "BaseBdev1", 00:17:53.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.777 "is_configured": false, 00:17:53.777 "data_offset": 0, 00:17:53.777 "data_size": 0 00:17:53.777 }, 00:17:53.777 { 00:17:53.777 "name": "BaseBdev2", 00:17:53.777 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:53.777 "is_configured": false, 00:17:53.777 "data_offset": 0, 00:17:53.777 "data_size": 0 00:17:53.777 }, 00:17:53.777 { 00:17:53.777 "name": "BaseBdev3", 00:17:53.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.777 "is_configured": false, 00:17:53.777 "data_offset": 0, 00:17:53.777 "data_size": 0 00:17:53.777 } 00:17:53.777 ] 00:17:53.777 }' 00:17:53.777 06:10:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:53.777 06:10:24 -- common/autotest_common.sh@10 -- # set +x 00:17:54.344 06:10:24 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:54.344 [2024-06-11 06:10:24.855440] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:54.344 [2024-06-11 06:10:24.855658] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:17:54.344 06:10:24 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:54.603 [2024-06-11 06:10:25.111558] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:54.603 [2024-06-11 06:10:25.111772] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:54.603 [2024-06-11 06:10:25.111869] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:54.603 [2024-06-11 06:10:25.111936] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:54.603 [2024-06-11 06:10:25.112011] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:54.603 [2024-06-11 06:10:25.112065] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:54.604 06:10:25 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:54.862 [2024-06-11 06:10:25.319612] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:54.862 BaseBdev1 00:17:54.862 06:10:25 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:54.862 06:10:25 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:54.862 06:10:25 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:54.862 06:10:25 -- common/autotest_common.sh@889 -- # local i 00:17:54.862 06:10:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:54.862 06:10:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:54.862 06:10:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:54.862 06:10:25 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:55.121 [ 00:17:55.121 { 00:17:55.121 "name": "BaseBdev1", 00:17:55.121 "aliases": [ 00:17:55.121 "49a6854c-1ef5-4774-825d-1060c4b5d1aa" 00:17:55.121 ], 00:17:55.121 "product_name": "Malloc disk", 00:17:55.121 "block_size": 512, 00:17:55.121 "num_blocks": 65536, 00:17:55.121 "uuid": "49a6854c-1ef5-4774-825d-1060c4b5d1aa", 00:17:55.121 "assigned_rate_limits": { 00:17:55.121 "rw_ios_per_sec": 0, 00:17:55.121 "rw_mbytes_per_sec": 0, 00:17:55.121 "r_mbytes_per_sec": 0, 00:17:55.121 "w_mbytes_per_sec": 0 
00:17:55.121 }, 00:17:55.121 "claimed": true, 00:17:55.121 "claim_type": "exclusive_write", 00:17:55.121 "zoned": false, 00:17:55.121 "supported_io_types": { 00:17:55.121 "read": true, 00:17:55.121 "write": true, 00:17:55.121 "unmap": true, 00:17:55.121 "write_zeroes": true, 00:17:55.121 "flush": true, 00:17:55.121 "reset": true, 00:17:55.121 "compare": false, 00:17:55.121 "compare_and_write": false, 00:17:55.121 "abort": true, 00:17:55.121 "nvme_admin": false, 00:17:55.121 "nvme_io": false 00:17:55.121 }, 00:17:55.121 "memory_domains": [ 00:17:55.121 { 00:17:55.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.121 "dma_device_type": 2 00:17:55.121 } 00:17:55.121 ], 00:17:55.121 "driver_specific": {} 00:17:55.121 } 00:17:55.121 ] 00:17:55.121 06:10:25 -- common/autotest_common.sh@895 -- # return 0 00:17:55.121 06:10:25 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:55.121 06:10:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:55.121 06:10:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:55.121 06:10:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:55.121 06:10:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:55.121 06:10:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:55.121 06:10:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:55.121 06:10:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:55.121 06:10:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:55.121 06:10:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:55.121 06:10:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.121 06:10:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.381 06:10:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:55.381 "name": "Existed_Raid", 00:17:55.381 "uuid": "02a21ab3-79ea-40f2-9572-b34d2322fde9", 00:17:55.381 "strip_size_kb": 0, 00:17:55.381 "state": "configuring", 00:17:55.381 "raid_level": "raid1", 00:17:55.381 "superblock": true, 00:17:55.381 "num_base_bdevs": 3, 00:17:55.381 "num_base_bdevs_discovered": 1, 00:17:55.381 "num_base_bdevs_operational": 3, 00:17:55.381 "base_bdevs_list": [ 00:17:55.381 { 00:17:55.381 "name": "BaseBdev1", 00:17:55.381 "uuid": "49a6854c-1ef5-4774-825d-1060c4b5d1aa", 00:17:55.381 "is_configured": true, 00:17:55.381 "data_offset": 2048, 00:17:55.381 "data_size": 63488 00:17:55.381 }, 00:17:55.381 { 00:17:55.381 "name": "BaseBdev2", 00:17:55.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.381 "is_configured": false, 00:17:55.381 "data_offset": 0, 00:17:55.381 "data_size": 0 00:17:55.381 }, 00:17:55.381 { 00:17:55.381 "name": "BaseBdev3", 00:17:55.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.381 "is_configured": false, 00:17:55.381 "data_offset": 0, 00:17:55.381 "data_size": 0 00:17:55.381 } 00:17:55.381 ] 00:17:55.381 }' 00:17:55.381 06:10:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:55.381 06:10:25 -- common/autotest_common.sh@10 -- # set +x 00:17:55.949 06:10:26 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:56.209 [2024-06-11 06:10:26.695874] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:56.209 [2024-06-11 06:10:26.696129] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006680 name Existed_Raid, state configuring 00:17:56.209 06:10:26 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:56.209 06:10:26 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:56.468 06:10:26 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:56.728 BaseBdev1 00:17:56.728 06:10:27 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:56.728 06:10:27 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:56.728 06:10:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:56.728 06:10:27 -- common/autotest_common.sh@889 -- # local i 00:17:56.728 06:10:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:56.728 06:10:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:56.728 06:10:27 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:56.987 06:10:27 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:56.987 [ 00:17:56.987 { 00:17:56.987 "name": "BaseBdev1", 00:17:56.987 "aliases": [ 00:17:56.987 "e193b24a-d3a3-4c2f-abbf-a6be3e44a9f3" 00:17:56.987 ], 00:17:56.987 "product_name": "Malloc disk", 00:17:56.987 "block_size": 512, 00:17:56.987 "num_blocks": 65536, 00:17:56.987 "uuid": "e193b24a-d3a3-4c2f-abbf-a6be3e44a9f3", 00:17:56.987 "assigned_rate_limits": { 00:17:56.987 "rw_ios_per_sec": 0, 00:17:56.987 "rw_mbytes_per_sec": 0, 00:17:56.987 "r_mbytes_per_sec": 0, 00:17:56.987 "w_mbytes_per_sec": 0 00:17:56.987 }, 00:17:56.987 "claimed": false, 00:17:56.987 "zoned": false, 00:17:56.987 "supported_io_types": { 00:17:56.987 "read": true, 00:17:56.987 "write": true, 00:17:56.987 "unmap": true, 00:17:56.987 "write_zeroes": true, 00:17:56.987 "flush": true, 00:17:56.987 "reset": true, 00:17:56.987 "compare": false, 00:17:56.987 "compare_and_write": false, 00:17:56.987 "abort": true, 00:17:56.987 "nvme_admin": false, 00:17:56.987 "nvme_io": false 00:17:56.987 }, 00:17:56.987 "memory_domains": [ 00:17:56.987 { 00:17:56.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.987 "dma_device_type": 2 00:17:56.987 } 00:17:56.987 ], 00:17:56.987 "driver_specific": {} 00:17:56.987 } 00:17:56.987 ] 00:17:56.987 06:10:27 -- common/autotest_common.sh@895 -- # return 0 00:17:56.987 06:10:27 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:57.247 [2024-06-11 06:10:27.766624] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:57.247 [2024-06-11 06:10:27.769006] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:57.247 [2024-06-11 06:10:27.769169] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:57.247 [2024-06-11 06:10:27.769255] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:57.247 [2024-06-11 06:10:27.769316] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:57.247 06:10:27 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:57.247 06:10:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:57.247 06:10:27 -- 
bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:57.247 06:10:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:57.247 06:10:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:57.247 06:10:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:57.247 06:10:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:57.247 06:10:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:57.247 06:10:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:57.247 06:10:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:57.247 06:10:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:57.247 06:10:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:57.247 06:10:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.247 06:10:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.506 06:10:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:57.506 "name": "Existed_Raid", 00:17:57.506 "uuid": "7eaf4f4e-7eb3-4fdf-9cf2-b44108d89f1d", 00:17:57.506 "strip_size_kb": 0, 00:17:57.506 "state": "configuring", 00:17:57.506 "raid_level": "raid1", 00:17:57.506 "superblock": true, 00:17:57.506 "num_base_bdevs": 3, 00:17:57.506 "num_base_bdevs_discovered": 1, 00:17:57.506 "num_base_bdevs_operational": 3, 00:17:57.506 "base_bdevs_list": [ 00:17:57.506 { 00:17:57.506 "name": "BaseBdev1", 00:17:57.506 "uuid": "e193b24a-d3a3-4c2f-abbf-a6be3e44a9f3", 00:17:57.506 "is_configured": true, 00:17:57.506 "data_offset": 2048, 00:17:57.506 "data_size": 63488 00:17:57.506 }, 00:17:57.506 { 00:17:57.506 "name": "BaseBdev2", 00:17:57.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.506 "is_configured": false, 00:17:57.506 "data_offset": 0, 00:17:57.506 "data_size": 0 00:17:57.506 }, 00:17:57.506 { 00:17:57.506 "name": "BaseBdev3", 00:17:57.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.506 "is_configured": false, 00:17:57.506 "data_offset": 0, 00:17:57.506 "data_size": 0 00:17:57.506 } 00:17:57.506 ] 00:17:57.506 }' 00:17:57.506 06:10:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:57.506 06:10:27 -- common/autotest_common.sh@10 -- # set +x 00:17:58.074 06:10:28 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:58.334 [2024-06-11 06:10:28.733607] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:58.334 BaseBdev2 00:17:58.334 06:10:28 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:58.334 06:10:28 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:58.334 06:10:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:58.334 06:10:28 -- common/autotest_common.sh@889 -- # local i 00:17:58.334 06:10:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:58.334 06:10:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:58.334 06:10:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:58.592 06:10:29 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:58.592 [ 00:17:58.592 { 00:17:58.592 "name": "BaseBdev2", 00:17:58.592 "aliases": [ 00:17:58.592 
"daa260c7-12a0-4a73-b520-a7649dda357f" 00:17:58.592 ], 00:17:58.592 "product_name": "Malloc disk", 00:17:58.592 "block_size": 512, 00:17:58.592 "num_blocks": 65536, 00:17:58.592 "uuid": "daa260c7-12a0-4a73-b520-a7649dda357f", 00:17:58.592 "assigned_rate_limits": { 00:17:58.592 "rw_ios_per_sec": 0, 00:17:58.592 "rw_mbytes_per_sec": 0, 00:17:58.592 "r_mbytes_per_sec": 0, 00:17:58.592 "w_mbytes_per_sec": 0 00:17:58.592 }, 00:17:58.592 "claimed": true, 00:17:58.592 "claim_type": "exclusive_write", 00:17:58.592 "zoned": false, 00:17:58.592 "supported_io_types": { 00:17:58.592 "read": true, 00:17:58.592 "write": true, 00:17:58.592 "unmap": true, 00:17:58.592 "write_zeroes": true, 00:17:58.592 "flush": true, 00:17:58.592 "reset": true, 00:17:58.592 "compare": false, 00:17:58.592 "compare_and_write": false, 00:17:58.592 "abort": true, 00:17:58.592 "nvme_admin": false, 00:17:58.592 "nvme_io": false 00:17:58.592 }, 00:17:58.592 "memory_domains": [ 00:17:58.592 { 00:17:58.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.592 "dma_device_type": 2 00:17:58.592 } 00:17:58.592 ], 00:17:58.592 "driver_specific": {} 00:17:58.592 } 00:17:58.592 ] 00:17:58.592 06:10:29 -- common/autotest_common.sh@895 -- # return 0 00:17:58.592 06:10:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:58.592 06:10:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:58.592 06:10:29 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:58.592 06:10:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:58.592 06:10:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:58.592 06:10:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:58.592 06:10:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:58.592 06:10:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:58.592 06:10:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:58.592 06:10:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:58.592 06:10:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:58.592 06:10:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:58.592 06:10:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.592 06:10:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.851 06:10:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:58.851 "name": "Existed_Raid", 00:17:58.851 "uuid": "7eaf4f4e-7eb3-4fdf-9cf2-b44108d89f1d", 00:17:58.851 "strip_size_kb": 0, 00:17:58.851 "state": "configuring", 00:17:58.851 "raid_level": "raid1", 00:17:58.851 "superblock": true, 00:17:58.851 "num_base_bdevs": 3, 00:17:58.851 "num_base_bdevs_discovered": 2, 00:17:58.851 "num_base_bdevs_operational": 3, 00:17:58.851 "base_bdevs_list": [ 00:17:58.851 { 00:17:58.851 "name": "BaseBdev1", 00:17:58.851 "uuid": "e193b24a-d3a3-4c2f-abbf-a6be3e44a9f3", 00:17:58.851 "is_configured": true, 00:17:58.851 "data_offset": 2048, 00:17:58.851 "data_size": 63488 00:17:58.851 }, 00:17:58.851 { 00:17:58.851 "name": "BaseBdev2", 00:17:58.851 "uuid": "daa260c7-12a0-4a73-b520-a7649dda357f", 00:17:58.851 "is_configured": true, 00:17:58.851 "data_offset": 2048, 00:17:58.851 "data_size": 63488 00:17:58.851 }, 00:17:58.851 { 00:17:58.851 "name": "BaseBdev3", 00:17:58.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.851 "is_configured": false, 00:17:58.851 "data_offset": 0, 00:17:58.851 "data_size": 0 00:17:58.851 } 
00:17:58.851 ] 00:17:58.851 }' 00:17:58.851 06:10:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:58.851 06:10:29 -- common/autotest_common.sh@10 -- # set +x 00:17:59.423 06:10:29 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:59.682 [2024-06-11 06:10:30.070031] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:59.682 [2024-06-11 06:10:30.070545] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:17:59.682 [2024-06-11 06:10:30.070677] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:59.682 [2024-06-11 06:10:30.070888] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:17:59.682 [2024-06-11 06:10:30.071413] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:17:59.682 [2024-06-11 06:10:30.071535] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:17:59.682 [2024-06-11 06:10:30.071797] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.682 BaseBdev3 00:17:59.682 06:10:30 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:59.682 06:10:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:59.682 06:10:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:59.682 06:10:30 -- common/autotest_common.sh@889 -- # local i 00:17:59.682 06:10:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:59.682 06:10:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:59.682 06:10:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:59.941 06:10:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:59.941 [ 00:17:59.941 { 00:17:59.941 "name": "BaseBdev3", 00:17:59.941 "aliases": [ 00:17:59.941 "d96ef27e-a106-41c8-9ebd-1836d9ce95bd" 00:17:59.941 ], 00:17:59.941 "product_name": "Malloc disk", 00:17:59.941 "block_size": 512, 00:17:59.941 "num_blocks": 65536, 00:17:59.941 "uuid": "d96ef27e-a106-41c8-9ebd-1836d9ce95bd", 00:17:59.941 "assigned_rate_limits": { 00:17:59.941 "rw_ios_per_sec": 0, 00:17:59.941 "rw_mbytes_per_sec": 0, 00:17:59.941 "r_mbytes_per_sec": 0, 00:17:59.941 "w_mbytes_per_sec": 0 00:17:59.941 }, 00:17:59.941 "claimed": true, 00:17:59.941 "claim_type": "exclusive_write", 00:17:59.941 "zoned": false, 00:17:59.941 "supported_io_types": { 00:17:59.941 "read": true, 00:17:59.941 "write": true, 00:17:59.941 "unmap": true, 00:17:59.941 "write_zeroes": true, 00:17:59.941 "flush": true, 00:17:59.941 "reset": true, 00:17:59.941 "compare": false, 00:17:59.941 "compare_and_write": false, 00:17:59.941 "abort": true, 00:17:59.942 "nvme_admin": false, 00:17:59.942 "nvme_io": false 00:17:59.942 }, 00:17:59.942 "memory_domains": [ 00:17:59.942 { 00:17:59.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.942 "dma_device_type": 2 00:17:59.942 } 00:17:59.942 ], 00:17:59.942 "driver_specific": {} 00:17:59.942 } 00:17:59.942 ] 00:17:59.942 06:10:30 -- common/autotest_common.sh@895 -- # return 0 00:17:59.942 06:10:30 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:59.942 06:10:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:59.942 06:10:30 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:59.942 06:10:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:59.942 06:10:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:59.942 06:10:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:00.201 06:10:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:00.201 06:10:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:00.201 06:10:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:00.201 06:10:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:00.201 06:10:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:00.201 06:10:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:00.201 06:10:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.201 06:10:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.201 06:10:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:00.201 "name": "Existed_Raid", 00:18:00.201 "uuid": "7eaf4f4e-7eb3-4fdf-9cf2-b44108d89f1d", 00:18:00.201 "strip_size_kb": 0, 00:18:00.201 "state": "online", 00:18:00.201 "raid_level": "raid1", 00:18:00.201 "superblock": true, 00:18:00.201 "num_base_bdevs": 3, 00:18:00.201 "num_base_bdevs_discovered": 3, 00:18:00.201 "num_base_bdevs_operational": 3, 00:18:00.201 "base_bdevs_list": [ 00:18:00.201 { 00:18:00.201 "name": "BaseBdev1", 00:18:00.201 "uuid": "e193b24a-d3a3-4c2f-abbf-a6be3e44a9f3", 00:18:00.201 "is_configured": true, 00:18:00.201 "data_offset": 2048, 00:18:00.201 "data_size": 63488 00:18:00.201 }, 00:18:00.201 { 00:18:00.201 "name": "BaseBdev2", 00:18:00.201 "uuid": "daa260c7-12a0-4a73-b520-a7649dda357f", 00:18:00.201 "is_configured": true, 00:18:00.201 "data_offset": 2048, 00:18:00.201 "data_size": 63488 00:18:00.201 }, 00:18:00.201 { 00:18:00.201 "name": "BaseBdev3", 00:18:00.201 "uuid": "d96ef27e-a106-41c8-9ebd-1836d9ce95bd", 00:18:00.201 "is_configured": true, 00:18:00.201 "data_offset": 2048, 00:18:00.201 "data_size": 63488 00:18:00.201 } 00:18:00.201 ] 00:18:00.201 }' 00:18:00.201 06:10:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:00.201 06:10:30 -- common/autotest_common.sh@10 -- # set +x 00:18:00.770 06:10:31 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:01.068 [2024-06-11 06:10:31.485327] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:01.068 06:10:31 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:01.068 06:10:31 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:18:01.068 06:10:31 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:01.068 06:10:31 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:01.068 06:10:31 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:18:01.068 06:10:31 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:01.068 06:10:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:01.068 06:10:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:01.068 06:10:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:01.068 06:10:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:01.068 06:10:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:01.068 06:10:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:01.068 06:10:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
00:18:01.068 06:10:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:01.068 06:10:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:01.068 06:10:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.068 06:10:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.327 06:10:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:01.327 "name": "Existed_Raid", 00:18:01.327 "uuid": "7eaf4f4e-7eb3-4fdf-9cf2-b44108d89f1d", 00:18:01.327 "strip_size_kb": 0, 00:18:01.327 "state": "online", 00:18:01.327 "raid_level": "raid1", 00:18:01.327 "superblock": true, 00:18:01.327 "num_base_bdevs": 3, 00:18:01.327 "num_base_bdevs_discovered": 2, 00:18:01.327 "num_base_bdevs_operational": 2, 00:18:01.327 "base_bdevs_list": [ 00:18:01.327 { 00:18:01.327 "name": null, 00:18:01.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.327 "is_configured": false, 00:18:01.327 "data_offset": 2048, 00:18:01.327 "data_size": 63488 00:18:01.327 }, 00:18:01.327 { 00:18:01.327 "name": "BaseBdev2", 00:18:01.327 "uuid": "daa260c7-12a0-4a73-b520-a7649dda357f", 00:18:01.327 "is_configured": true, 00:18:01.327 "data_offset": 2048, 00:18:01.327 "data_size": 63488 00:18:01.327 }, 00:18:01.327 { 00:18:01.328 "name": "BaseBdev3", 00:18:01.328 "uuid": "d96ef27e-a106-41c8-9ebd-1836d9ce95bd", 00:18:01.328 "is_configured": true, 00:18:01.328 "data_offset": 2048, 00:18:01.328 "data_size": 63488 00:18:01.328 } 00:18:01.328 ] 00:18:01.328 }' 00:18:01.328 06:10:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:01.328 06:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.896 06:10:32 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:01.896 06:10:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:01.896 06:10:32 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.896 06:10:32 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:02.156 06:10:32 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:02.156 06:10:32 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:02.156 06:10:32 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:02.415 [2024-06-11 06:10:32.849508] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:02.415 06:10:32 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:02.415 06:10:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:02.415 06:10:32 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.415 06:10:32 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:02.683 06:10:33 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:02.683 06:10:33 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:02.683 06:10:33 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:03.033 [2024-06-11 06:10:33.396600] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:03.033 [2024-06-11 06:10:33.396858] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.033 [2024-06-11 06:10:33.397026] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.033 [2024-06-11 06:10:33.498959] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.033 [2024-06-11 06:10:33.499233] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:18:03.033 06:10:33 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:03.033 06:10:33 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:03.033 06:10:33 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.033 06:10:33 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:03.295 06:10:33 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:03.295 06:10:33 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:03.295 06:10:33 -- bdev/bdev_raid.sh@287 -- # killprocess 118094 00:18:03.295 06:10:33 -- common/autotest_common.sh@926 -- # '[' -z 118094 ']' 00:18:03.295 06:10:33 -- common/autotest_common.sh@930 -- # kill -0 118094 00:18:03.295 06:10:33 -- common/autotest_common.sh@931 -- # uname 00:18:03.295 06:10:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:03.295 06:10:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118094 00:18:03.295 06:10:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:03.295 06:10:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:03.295 06:10:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118094' 00:18:03.295 killing process with pid 118094 00:18:03.295 06:10:33 -- common/autotest_common.sh@945 -- # kill 118094 00:18:03.295 [2024-06-11 06:10:33.784730] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:03.295 06:10:33 -- common/autotest_common.sh@950 -- # wait 118094 00:18:03.295 [2024-06-11 06:10:33.785019] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:04.674 06:10:35 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:04.674 ************************************ 00:18:04.674 END TEST raid_state_function_test_sb 00:18:04.674 ************************************ 00:18:04.674 00:18:04.674 real 0m12.358s 00:18:04.674 user 0m20.451s 00:18:04.674 sys 0m2.119s 00:18:04.674 06:10:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:04.674 06:10:35 -- common/autotest_common.sh@10 -- # set +x 00:18:04.674 06:10:35 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:18:04.674 06:10:35 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:04.674 06:10:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:04.674 06:10:35 -- common/autotest_common.sh@10 -- # set +x 00:18:04.674 ************************************ 00:18:04.674 START TEST raid_superblock_test 00:18:04.674 ************************************ 00:18:04.674 06:10:35 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 3 00:18:04.674 06:10:35 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:18:04.674 06:10:35 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:18:04.674 06:10:35 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:04.674 06:10:35 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:04.674 06:10:35 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:04.674 06:10:35 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:04.674 06:10:35 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:04.674 06:10:35 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:04.674 06:10:35 -- bdev/bdev_raid.sh@343 
-- # local raid_bdev_name=raid_bdev1 00:18:04.674 06:10:35 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:04.674 06:10:35 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:04.674 06:10:35 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:04.674 06:10:35 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:04.674 06:10:35 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:18:04.674 06:10:35 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:18:04.674 06:10:35 -- bdev/bdev_raid.sh@357 -- # raid_pid=118474 00:18:04.674 06:10:35 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:04.674 06:10:35 -- bdev/bdev_raid.sh@358 -- # waitforlisten 118474 /var/tmp/spdk-raid.sock 00:18:04.674 06:10:35 -- common/autotest_common.sh@819 -- # '[' -z 118474 ']' 00:18:04.674 06:10:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:04.674 06:10:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:04.674 06:10:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:04.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:04.674 06:10:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:04.674 06:10:35 -- common/autotest_common.sh@10 -- # set +x 00:18:04.674 [2024-06-11 06:10:35.311611] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:04.674 [2024-06-11 06:10:35.312057] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118474 ] 00:18:04.933 [2024-06-11 06:10:35.500722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.191 [2024-06-11 06:10:35.787910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.449 [2024-06-11 06:10:36.034950] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:06.385 06:10:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:06.385 06:10:36 -- common/autotest_common.sh@852 -- # return 0 00:18:06.385 06:10:36 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:06.385 06:10:36 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:06.385 06:10:36 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:06.385 06:10:36 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:06.385 06:10:36 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:06.385 06:10:36 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:06.385 06:10:36 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:06.385 06:10:36 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:06.385 06:10:36 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:06.644 malloc1 00:18:06.644 06:10:37 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:06.902 [2024-06-11 06:10:37.339150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:06.902 [2024-06-11 06:10:37.339450] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.902 [2024-06-11 06:10:37.339606] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:18:06.902 [2024-06-11 06:10:37.339756] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.902 [2024-06-11 06:10:37.342528] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.902 [2024-06-11 06:10:37.342683] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:06.902 pt1 00:18:06.902 06:10:37 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:06.902 06:10:37 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:06.902 06:10:37 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:06.902 06:10:37 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:06.902 06:10:37 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:06.902 06:10:37 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:06.902 06:10:37 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:06.902 06:10:37 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:06.902 06:10:37 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:07.160 malloc2 00:18:07.160 06:10:37 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:07.160 [2024-06-11 06:10:37.801513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:07.160 [2024-06-11 06:10:37.801758] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.160 [2024-06-11 06:10:37.801894] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:07.160 [2024-06-11 06:10:37.802023] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.419 [2024-06-11 06:10:37.804731] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.419 [2024-06-11 06:10:37.804896] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:07.419 pt2 00:18:07.419 06:10:37 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:07.419 06:10:37 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:07.419 06:10:37 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:07.419 06:10:37 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:07.419 06:10:37 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:07.419 06:10:37 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:07.419 06:10:37 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:07.419 06:10:37 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:07.419 06:10:37 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:07.677 malloc3 00:18:07.677 06:10:38 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:07.936 [2024-06-11 06:10:38.345270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:07.936 [2024-06-11 06:10:38.345561] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.936 [2024-06-11 06:10:38.345645] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:18:07.936 [2024-06-11 06:10:38.345772] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.936 [2024-06-11 06:10:38.348408] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.936 [2024-06-11 06:10:38.348569] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:07.936 pt3 00:18:07.936 06:10:38 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:07.936 06:10:38 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:07.936 06:10:38 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:18:07.936 [2024-06-11 06:10:38.517525] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:07.936 [2024-06-11 06:10:38.519897] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:07.936 [2024-06-11 06:10:38.520082] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:07.936 [2024-06-11 06:10:38.520302] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:18:07.936 [2024-06-11 06:10:38.520393] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:07.936 [2024-06-11 06:10:38.520554] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:18:07.936 [2024-06-11 06:10:38.521016] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:18:07.936 [2024-06-11 06:10:38.521123] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:18:07.936 [2024-06-11 06:10:38.521385] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.936 06:10:38 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:07.936 06:10:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:07.936 06:10:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:07.936 06:10:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:07.936 06:10:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:07.936 06:10:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:07.936 06:10:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:07.936 06:10:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:07.936 06:10:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:07.937 06:10:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:07.937 06:10:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.937 06:10:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.196 06:10:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:08.196 "name": "raid_bdev1", 00:18:08.196 "uuid": "38e2838a-f14d-475d-a398-9d7b131c801d", 00:18:08.196 "strip_size_kb": 0, 00:18:08.196 "state": "online", 00:18:08.196 "raid_level": "raid1", 00:18:08.196 "superblock": true, 00:18:08.196 "num_base_bdevs": 3, 00:18:08.196 "num_base_bdevs_discovered": 3, 00:18:08.196 "num_base_bdevs_operational": 3, 00:18:08.196 "base_bdevs_list": [ 00:18:08.196 { 00:18:08.196 "name": 
"pt1", 00:18:08.196 "uuid": "db9d2ec1-4ba1-54de-bb2e-c15fae115b46", 00:18:08.196 "is_configured": true, 00:18:08.196 "data_offset": 2048, 00:18:08.196 "data_size": 63488 00:18:08.196 }, 00:18:08.196 { 00:18:08.196 "name": "pt2", 00:18:08.196 "uuid": "4289a491-9eb9-559e-8503-0ded1fc95ee2", 00:18:08.196 "is_configured": true, 00:18:08.196 "data_offset": 2048, 00:18:08.196 "data_size": 63488 00:18:08.196 }, 00:18:08.196 { 00:18:08.196 "name": "pt3", 00:18:08.196 "uuid": "20893e8b-6232-5d3c-862d-cf3bb06bdae0", 00:18:08.196 "is_configured": true, 00:18:08.196 "data_offset": 2048, 00:18:08.196 "data_size": 63488 00:18:08.196 } 00:18:08.196 ] 00:18:08.196 }' 00:18:08.196 06:10:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:08.196 06:10:38 -- common/autotest_common.sh@10 -- # set +x 00:18:08.764 06:10:39 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:08.764 06:10:39 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:09.023 [2024-06-11 06:10:39.481832] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:09.023 06:10:39 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=38e2838a-f14d-475d-a398-9d7b131c801d 00:18:09.023 06:10:39 -- bdev/bdev_raid.sh@380 -- # '[' -z 38e2838a-f14d-475d-a398-9d7b131c801d ']' 00:18:09.023 06:10:39 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:09.023 [2024-06-11 06:10:39.657673] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:09.023 [2024-06-11 06:10:39.657858] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:09.023 [2024-06-11 06:10:39.658029] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:09.023 [2024-06-11 06:10:39.658220] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:09.023 [2024-06-11 06:10:39.658313] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:18:09.281 06:10:39 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.281 06:10:39 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:09.281 06:10:39 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:09.281 06:10:39 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:09.281 06:10:39 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:09.281 06:10:39 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:09.544 06:10:40 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:09.544 06:10:40 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:09.805 06:10:40 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:09.805 06:10:40 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:10.063 06:10:40 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:10.063 06:10:40 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:10.063 06:10:40 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:10.063 06:10:40 -- 
bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:10.063 06:10:40 -- common/autotest_common.sh@640 -- # local es=0 00:18:10.063 06:10:40 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:10.063 06:10:40 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:10.063 06:10:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:10.063 06:10:40 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:10.063 06:10:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:10.063 06:10:40 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:10.063 06:10:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:10.063 06:10:40 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:10.063 06:10:40 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:10.063 06:10:40 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:10.322 [2024-06-11 06:10:40.849861] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:10.322 [2024-06-11 06:10:40.852313] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:10.322 [2024-06-11 06:10:40.852518] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:10.322 [2024-06-11 06:10:40.852606] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:10.322 [2024-06-11 06:10:40.852801] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:10.322 [2024-06-11 06:10:40.852875] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:10.322 [2024-06-11 06:10:40.853000] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:10.322 [2024-06-11 06:10:40.853037] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:18:10.322 request: 00:18:10.322 { 00:18:10.322 "name": "raid_bdev1", 00:18:10.322 "raid_level": "raid1", 00:18:10.322 "base_bdevs": [ 00:18:10.322 "malloc1", 00:18:10.322 "malloc2", 00:18:10.322 "malloc3" 00:18:10.322 ], 00:18:10.322 "superblock": false, 00:18:10.322 "method": "bdev_raid_create", 00:18:10.322 "req_id": 1 00:18:10.322 } 00:18:10.322 Got JSON-RPC error response 00:18:10.322 response: 00:18:10.322 { 00:18:10.322 "code": -17, 00:18:10.322 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:10.322 } 00:18:10.322 06:10:40 -- common/autotest_common.sh@643 -- # es=1 00:18:10.322 06:10:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:10.322 06:10:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:10.322 06:10:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:10.322 06:10:40 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:18:10.322 06:10:40 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:10.580 06:10:41 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:10.580 06:10:41 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:10.580 06:10:41 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:10.839 [2024-06-11 06:10:41.273895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:10.839 [2024-06-11 06:10:41.274146] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.839 [2024-06-11 06:10:41.274309] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:10.839 [2024-06-11 06:10:41.274407] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.839 [2024-06-11 06:10:41.277114] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.839 [2024-06-11 06:10:41.277262] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:10.839 [2024-06-11 06:10:41.277489] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:10.839 [2024-06-11 06:10:41.277640] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:10.839 pt1 00:18:10.839 06:10:41 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:10.839 06:10:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:10.839 06:10:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:10.839 06:10:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:10.839 06:10:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:10.839 06:10:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:10.839 06:10:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:10.839 06:10:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:10.839 06:10:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:10.839 06:10:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:10.839 06:10:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.839 06:10:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.097 06:10:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:11.097 "name": "raid_bdev1", 00:18:11.097 "uuid": "38e2838a-f14d-475d-a398-9d7b131c801d", 00:18:11.097 "strip_size_kb": 0, 00:18:11.097 "state": "configuring", 00:18:11.097 "raid_level": "raid1", 00:18:11.097 "superblock": true, 00:18:11.097 "num_base_bdevs": 3, 00:18:11.097 "num_base_bdevs_discovered": 1, 00:18:11.097 "num_base_bdevs_operational": 3, 00:18:11.097 "base_bdevs_list": [ 00:18:11.097 { 00:18:11.097 "name": "pt1", 00:18:11.097 "uuid": "db9d2ec1-4ba1-54de-bb2e-c15fae115b46", 00:18:11.097 "is_configured": true, 00:18:11.097 "data_offset": 2048, 00:18:11.097 "data_size": 63488 00:18:11.097 }, 00:18:11.097 { 00:18:11.097 "name": null, 00:18:11.097 "uuid": "4289a491-9eb9-559e-8503-0ded1fc95ee2", 00:18:11.097 "is_configured": false, 00:18:11.097 "data_offset": 2048, 00:18:11.097 "data_size": 63488 00:18:11.097 }, 00:18:11.097 { 00:18:11.097 "name": null, 00:18:11.097 "uuid": "20893e8b-6232-5d3c-862d-cf3bb06bdae0", 00:18:11.097 "is_configured": false, 00:18:11.097 "data_offset": 2048, 00:18:11.097 
"data_size": 63488 00:18:11.097 } 00:18:11.097 ] 00:18:11.097 }' 00:18:11.097 06:10:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:11.097 06:10:41 -- common/autotest_common.sh@10 -- # set +x 00:18:11.664 06:10:42 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:18:11.664 06:10:42 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:11.922 [2024-06-11 06:10:42.342148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:11.922 [2024-06-11 06:10:42.342402] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.922 [2024-06-11 06:10:42.342532] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:18:11.922 [2024-06-11 06:10:42.342621] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.922 [2024-06-11 06:10:42.343208] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.922 [2024-06-11 06:10:42.343345] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:11.922 [2024-06-11 06:10:42.343589] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:11.922 [2024-06-11 06:10:42.343695] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:11.922 pt2 00:18:11.922 06:10:42 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:12.181 [2024-06-11 06:10:42.606237] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:12.181 06:10:42 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:12.181 06:10:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:12.181 06:10:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:12.181 06:10:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:12.181 06:10:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:12.181 06:10:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:12.181 06:10:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:12.181 06:10:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:12.181 06:10:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:12.181 06:10:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:12.181 06:10:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.181 06:10:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.440 06:10:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:12.440 "name": "raid_bdev1", 00:18:12.440 "uuid": "38e2838a-f14d-475d-a398-9d7b131c801d", 00:18:12.440 "strip_size_kb": 0, 00:18:12.440 "state": "configuring", 00:18:12.440 "raid_level": "raid1", 00:18:12.440 "superblock": true, 00:18:12.440 "num_base_bdevs": 3, 00:18:12.440 "num_base_bdevs_discovered": 1, 00:18:12.440 "num_base_bdevs_operational": 3, 00:18:12.440 "base_bdevs_list": [ 00:18:12.440 { 00:18:12.440 "name": "pt1", 00:18:12.440 "uuid": "db9d2ec1-4ba1-54de-bb2e-c15fae115b46", 00:18:12.440 "is_configured": true, 00:18:12.440 "data_offset": 2048, 00:18:12.440 "data_size": 63488 00:18:12.440 }, 00:18:12.440 { 00:18:12.440 "name": null, 00:18:12.440 "uuid": "4289a491-9eb9-559e-8503-0ded1fc95ee2", 
00:18:12.440 "is_configured": false, 00:18:12.440 "data_offset": 2048, 00:18:12.440 "data_size": 63488 00:18:12.440 }, 00:18:12.440 { 00:18:12.440 "name": null, 00:18:12.440 "uuid": "20893e8b-6232-5d3c-862d-cf3bb06bdae0", 00:18:12.440 "is_configured": false, 00:18:12.440 "data_offset": 2048, 00:18:12.440 "data_size": 63488 00:18:12.440 } 00:18:12.440 ] 00:18:12.440 }' 00:18:12.440 06:10:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:12.440 06:10:42 -- common/autotest_common.sh@10 -- # set +x 00:18:13.008 06:10:43 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:13.008 06:10:43 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:13.008 06:10:43 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:13.008 [2024-06-11 06:10:43.618374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:13.008 [2024-06-11 06:10:43.618660] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.008 [2024-06-11 06:10:43.618735] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:13.008 [2024-06-11 06:10:43.618833] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.008 [2024-06-11 06:10:43.619460] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.008 [2024-06-11 06:10:43.619608] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:13.008 [2024-06-11 06:10:43.619874] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:13.009 [2024-06-11 06:10:43.619970] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:13.009 pt2 00:18:13.009 06:10:43 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:13.009 06:10:43 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:13.009 06:10:43 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:13.268 [2024-06-11 06:10:43.878435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:13.268 [2024-06-11 06:10:43.878705] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.268 [2024-06-11 06:10:43.878778] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:13.268 [2024-06-11 06:10:43.878884] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.268 [2024-06-11 06:10:43.879409] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.268 [2024-06-11 06:10:43.879564] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:13.268 [2024-06-11 06:10:43.879776] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:13.268 [2024-06-11 06:10:43.879871] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:13.268 [2024-06-11 06:10:43.880054] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:18:13.268 [2024-06-11 06:10:43.880152] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:13.268 [2024-06-11 06:10:43.880293] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:13.268 
[2024-06-11 06:10:43.880785] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:18:13.268 [2024-06-11 06:10:43.880906] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:18:13.268 [2024-06-11 06:10:43.881113] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.268 pt3 00:18:13.268 06:10:43 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:13.268 06:10:43 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:13.268 06:10:43 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:13.268 06:10:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:13.268 06:10:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:13.268 06:10:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:13.268 06:10:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:13.268 06:10:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:13.268 06:10:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:13.268 06:10:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:13.268 06:10:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:13.268 06:10:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:13.268 06:10:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.268 06:10:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.527 06:10:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:13.527 "name": "raid_bdev1", 00:18:13.527 "uuid": "38e2838a-f14d-475d-a398-9d7b131c801d", 00:18:13.527 "strip_size_kb": 0, 00:18:13.527 "state": "online", 00:18:13.527 "raid_level": "raid1", 00:18:13.527 "superblock": true, 00:18:13.527 "num_base_bdevs": 3, 00:18:13.527 "num_base_bdevs_discovered": 3, 00:18:13.527 "num_base_bdevs_operational": 3, 00:18:13.527 "base_bdevs_list": [ 00:18:13.527 { 00:18:13.527 "name": "pt1", 00:18:13.527 "uuid": "db9d2ec1-4ba1-54de-bb2e-c15fae115b46", 00:18:13.527 "is_configured": true, 00:18:13.527 "data_offset": 2048, 00:18:13.527 "data_size": 63488 00:18:13.527 }, 00:18:13.527 { 00:18:13.527 "name": "pt2", 00:18:13.527 "uuid": "4289a491-9eb9-559e-8503-0ded1fc95ee2", 00:18:13.527 "is_configured": true, 00:18:13.527 "data_offset": 2048, 00:18:13.527 "data_size": 63488 00:18:13.527 }, 00:18:13.527 { 00:18:13.527 "name": "pt3", 00:18:13.527 "uuid": "20893e8b-6232-5d3c-862d-cf3bb06bdae0", 00:18:13.527 "is_configured": true, 00:18:13.527 "data_offset": 2048, 00:18:13.527 "data_size": 63488 00:18:13.527 } 00:18:13.527 ] 00:18:13.527 }' 00:18:13.527 06:10:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:13.527 06:10:44 -- common/autotest_common.sh@10 -- # set +x 00:18:14.095 06:10:44 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:14.095 06:10:44 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:14.354 [2024-06-11 06:10:44.742812] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:14.354 06:10:44 -- bdev/bdev_raid.sh@430 -- # '[' 38e2838a-f14d-475d-a398-9d7b131c801d '!=' 38e2838a-f14d-475d-a398-9d7b131c801d ']' 00:18:14.354 06:10:44 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:18:14.354 06:10:44 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:14.354 06:10:44 -- bdev/bdev_raid.sh@196 -- # return 0 
00:18:14.354 06:10:44 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:14.354 [2024-06-11 06:10:44.906697] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:14.354 06:10:44 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:14.355 06:10:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:14.355 06:10:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:14.355 06:10:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:14.355 06:10:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:14.355 06:10:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:14.355 06:10:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:14.355 06:10:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:14.355 06:10:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:14.355 06:10:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:14.355 06:10:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.355 06:10:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.613 06:10:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:14.613 "name": "raid_bdev1", 00:18:14.613 "uuid": "38e2838a-f14d-475d-a398-9d7b131c801d", 00:18:14.613 "strip_size_kb": 0, 00:18:14.613 "state": "online", 00:18:14.613 "raid_level": "raid1", 00:18:14.613 "superblock": true, 00:18:14.613 "num_base_bdevs": 3, 00:18:14.614 "num_base_bdevs_discovered": 2, 00:18:14.614 "num_base_bdevs_operational": 2, 00:18:14.614 "base_bdevs_list": [ 00:18:14.614 { 00:18:14.614 "name": null, 00:18:14.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.614 "is_configured": false, 00:18:14.614 "data_offset": 2048, 00:18:14.614 "data_size": 63488 00:18:14.614 }, 00:18:14.614 { 00:18:14.614 "name": "pt2", 00:18:14.614 "uuid": "4289a491-9eb9-559e-8503-0ded1fc95ee2", 00:18:14.614 "is_configured": true, 00:18:14.614 "data_offset": 2048, 00:18:14.614 "data_size": 63488 00:18:14.614 }, 00:18:14.614 { 00:18:14.614 "name": "pt3", 00:18:14.614 "uuid": "20893e8b-6232-5d3c-862d-cf3bb06bdae0", 00:18:14.614 "is_configured": true, 00:18:14.614 "data_offset": 2048, 00:18:14.614 "data_size": 63488 00:18:14.614 } 00:18:14.614 ] 00:18:14.614 }' 00:18:14.614 06:10:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:14.614 06:10:45 -- common/autotest_common.sh@10 -- # set +x 00:18:15.181 06:10:45 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:15.444 [2024-06-11 06:10:45.954823] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:15.444 [2024-06-11 06:10:45.955009] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:15.444 [2024-06-11 06:10:45.955250] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:15.444 [2024-06-11 06:10:45.955406] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:15.444 [2024-06-11 06:10:45.955507] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:18:15.444 06:10:45 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:18:15.444 06:10:45 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:18:15.701 06:10:46 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:18:15.701 06:10:46 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:18:15.701 06:10:46 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:18:15.701 06:10:46 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:15.701 06:10:46 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:15.959 06:10:46 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:18:15.959 06:10:46 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:15.959 06:10:46 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:16.217 06:10:46 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:18:16.217 06:10:46 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:16.217 06:10:46 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:18:16.217 06:10:46 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:18:16.217 06:10:46 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:16.217 [2024-06-11 06:10:46.762967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:16.217 [2024-06-11 06:10:46.763262] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.217 [2024-06-11 06:10:46.763339] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:18:16.217 [2024-06-11 06:10:46.763441] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.217 [2024-06-11 06:10:46.766194] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.217 [2024-06-11 06:10:46.766362] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:16.217 [2024-06-11 06:10:46.766596] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:16.217 [2024-06-11 06:10:46.766729] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:16.217 pt2 00:18:16.217 06:10:46 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:16.217 06:10:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:16.217 06:10:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:16.217 06:10:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:16.217 06:10:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:16.217 06:10:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:16.217 06:10:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:16.217 06:10:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:16.217 06:10:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:16.217 06:10:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:16.217 06:10:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.218 06:10:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.476 06:10:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:16.476 "name": "raid_bdev1", 00:18:16.476 "uuid": "38e2838a-f14d-475d-a398-9d7b131c801d", 00:18:16.476 "strip_size_kb": 0, 00:18:16.476 "state": "configuring", 00:18:16.476 "raid_level": 
"raid1", 00:18:16.476 "superblock": true, 00:18:16.476 "num_base_bdevs": 3, 00:18:16.476 "num_base_bdevs_discovered": 1, 00:18:16.476 "num_base_bdevs_operational": 2, 00:18:16.476 "base_bdevs_list": [ 00:18:16.476 { 00:18:16.476 "name": null, 00:18:16.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.476 "is_configured": false, 00:18:16.476 "data_offset": 2048, 00:18:16.476 "data_size": 63488 00:18:16.476 }, 00:18:16.476 { 00:18:16.476 "name": "pt2", 00:18:16.476 "uuid": "4289a491-9eb9-559e-8503-0ded1fc95ee2", 00:18:16.476 "is_configured": true, 00:18:16.476 "data_offset": 2048, 00:18:16.476 "data_size": 63488 00:18:16.476 }, 00:18:16.476 { 00:18:16.476 "name": null, 00:18:16.476 "uuid": "20893e8b-6232-5d3c-862d-cf3bb06bdae0", 00:18:16.476 "is_configured": false, 00:18:16.476 "data_offset": 2048, 00:18:16.476 "data_size": 63488 00:18:16.476 } 00:18:16.476 ] 00:18:16.476 }' 00:18:16.476 06:10:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:16.476 06:10:47 -- common/autotest_common.sh@10 -- # set +x 00:18:17.043 06:10:47 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:18:17.043 06:10:47 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:18:17.043 06:10:47 -- bdev/bdev_raid.sh@462 -- # i=2 00:18:17.043 06:10:47 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:17.302 [2024-06-11 06:10:47.847178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:17.302 [2024-06-11 06:10:47.847493] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.302 [2024-06-11 06:10:47.847578] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:17.302 [2024-06-11 06:10:47.847674] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.302 [2024-06-11 06:10:47.848238] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.302 [2024-06-11 06:10:47.848376] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:17.302 [2024-06-11 06:10:47.848611] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:17.302 [2024-06-11 06:10:47.848706] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:17.302 [2024-06-11 06:10:47.848877] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:18:17.302 [2024-06-11 06:10:47.849001] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:17.303 [2024-06-11 06:10:47.849138] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:17.303 [2024-06-11 06:10:47.849521] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:18:17.303 [2024-06-11 06:10:47.849627] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:18:17.303 [2024-06-11 06:10:47.849839] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.303 pt3 00:18:17.303 06:10:47 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:17.303 06:10:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:17.303 06:10:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:17.303 06:10:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:17.303 
06:10:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:17.303 06:10:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:17.303 06:10:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:17.303 06:10:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:17.303 06:10:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:17.303 06:10:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:17.303 06:10:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.303 06:10:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.562 06:10:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:17.562 "name": "raid_bdev1", 00:18:17.562 "uuid": "38e2838a-f14d-475d-a398-9d7b131c801d", 00:18:17.562 "strip_size_kb": 0, 00:18:17.562 "state": "online", 00:18:17.562 "raid_level": "raid1", 00:18:17.562 "superblock": true, 00:18:17.562 "num_base_bdevs": 3, 00:18:17.562 "num_base_bdevs_discovered": 2, 00:18:17.562 "num_base_bdevs_operational": 2, 00:18:17.562 "base_bdevs_list": [ 00:18:17.562 { 00:18:17.562 "name": null, 00:18:17.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.562 "is_configured": false, 00:18:17.562 "data_offset": 2048, 00:18:17.562 "data_size": 63488 00:18:17.562 }, 00:18:17.562 { 00:18:17.562 "name": "pt2", 00:18:17.562 "uuid": "4289a491-9eb9-559e-8503-0ded1fc95ee2", 00:18:17.562 "is_configured": true, 00:18:17.562 "data_offset": 2048, 00:18:17.562 "data_size": 63488 00:18:17.562 }, 00:18:17.562 { 00:18:17.562 "name": "pt3", 00:18:17.562 "uuid": "20893e8b-6232-5d3c-862d-cf3bb06bdae0", 00:18:17.562 "is_configured": true, 00:18:17.562 "data_offset": 2048, 00:18:17.562 "data_size": 63488 00:18:17.562 } 00:18:17.562 ] 00:18:17.562 }' 00:18:17.562 06:10:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:17.562 06:10:48 -- common/autotest_common.sh@10 -- # set +x 00:18:18.130 06:10:48 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:18:18.130 06:10:48 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:18.389 [2024-06-11 06:10:48.855348] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:18.389 [2024-06-11 06:10:48.855550] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:18.389 [2024-06-11 06:10:48.855768] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:18.389 [2024-06-11 06:10:48.855866] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:18.389 [2024-06-11 06:10:48.856037] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:18:18.389 06:10:48 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.389 06:10:48 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:18:18.649 06:10:49 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:18:18.649 06:10:49 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:18:18.649 06:10:49 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:18.649 [2024-06-11 06:10:49.267429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:18.649 [2024-06-11 
06:10:49.267674] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:18.649 [2024-06-11 06:10:49.267753] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:18.649 [2024-06-11 06:10:49.267850] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:18.649 [2024-06-11 06:10:49.270558] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:18.649 [2024-06-11 06:10:49.270742] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:18.649 [2024-06-11 06:10:49.270955] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:18.649 [2024-06-11 06:10:49.271107] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:18.649 pt1 00:18:18.649 06:10:49 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:18.649 06:10:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:18.649 06:10:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:18.649 06:10:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:18.649 06:10:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:18.649 06:10:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:18.649 06:10:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:18.649 06:10:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:18.649 06:10:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:18.649 06:10:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:18.649 06:10:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.649 06:10:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.908 06:10:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:18.908 "name": "raid_bdev1", 00:18:18.908 "uuid": "38e2838a-f14d-475d-a398-9d7b131c801d", 00:18:18.908 "strip_size_kb": 0, 00:18:18.908 "state": "configuring", 00:18:18.908 "raid_level": "raid1", 00:18:18.908 "superblock": true, 00:18:18.908 "num_base_bdevs": 3, 00:18:18.908 "num_base_bdevs_discovered": 1, 00:18:18.908 "num_base_bdevs_operational": 3, 00:18:18.908 "base_bdevs_list": [ 00:18:18.908 { 00:18:18.908 "name": "pt1", 00:18:18.908 "uuid": "db9d2ec1-4ba1-54de-bb2e-c15fae115b46", 00:18:18.908 "is_configured": true, 00:18:18.908 "data_offset": 2048, 00:18:18.908 "data_size": 63488 00:18:18.908 }, 00:18:18.908 { 00:18:18.908 "name": null, 00:18:18.908 "uuid": "4289a491-9eb9-559e-8503-0ded1fc95ee2", 00:18:18.908 "is_configured": false, 00:18:18.908 "data_offset": 2048, 00:18:18.908 "data_size": 63488 00:18:18.908 }, 00:18:18.908 { 00:18:18.908 "name": null, 00:18:18.908 "uuid": "20893e8b-6232-5d3c-862d-cf3bb06bdae0", 00:18:18.908 "is_configured": false, 00:18:18.908 "data_offset": 2048, 00:18:18.908 "data_size": 63488 00:18:18.908 } 00:18:18.908 ] 00:18:18.908 }' 00:18:18.908 06:10:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:18.908 06:10:49 -- common/autotest_common.sh@10 -- # set +x 00:18:19.484 06:10:50 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:18:19.484 06:10:50 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:19.484 06:10:50 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:19.743 06:10:50 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:18:19.743 
06:10:50 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:19.743 06:10:50 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:20.002 06:10:50 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:18:20.002 06:10:50 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:20.002 06:10:50 -- bdev/bdev_raid.sh@489 -- # i=2 00:18:20.002 06:10:50 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:20.261 [2024-06-11 06:10:50.696716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:20.261 [2024-06-11 06:10:50.696999] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.261 [2024-06-11 06:10:50.697071] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:20.261 [2024-06-11 06:10:50.697180] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.261 [2024-06-11 06:10:50.697755] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.261 [2024-06-11 06:10:50.697906] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:20.261 [2024-06-11 06:10:50.698142] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:20.261 [2024-06-11 06:10:50.698227] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:20.261 [2024-06-11 06:10:50.698311] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:20.261 [2024-06-11 06:10:50.698404] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b780 name raid_bdev1, state configuring 00:18:20.261 [2024-06-11 06:10:50.698542] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:20.261 pt3 00:18:20.261 06:10:50 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:20.261 06:10:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:20.261 06:10:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:20.261 06:10:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:20.261 06:10:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:20.261 06:10:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:20.261 06:10:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:20.261 06:10:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:20.261 06:10:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:20.261 06:10:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:20.261 06:10:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.261 06:10:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.520 06:10:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:20.520 "name": "raid_bdev1", 00:18:20.520 "uuid": "38e2838a-f14d-475d-a398-9d7b131c801d", 00:18:20.521 "strip_size_kb": 0, 00:18:20.521 "state": "configuring", 00:18:20.521 "raid_level": "raid1", 00:18:20.521 "superblock": true, 00:18:20.521 "num_base_bdevs": 3, 00:18:20.521 "num_base_bdevs_discovered": 1, 00:18:20.521 "num_base_bdevs_operational": 2, 00:18:20.521 
"base_bdevs_list": [ 00:18:20.521 { 00:18:20.521 "name": null, 00:18:20.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.521 "is_configured": false, 00:18:20.521 "data_offset": 2048, 00:18:20.521 "data_size": 63488 00:18:20.521 }, 00:18:20.521 { 00:18:20.521 "name": null, 00:18:20.521 "uuid": "4289a491-9eb9-559e-8503-0ded1fc95ee2", 00:18:20.521 "is_configured": false, 00:18:20.521 "data_offset": 2048, 00:18:20.521 "data_size": 63488 00:18:20.521 }, 00:18:20.521 { 00:18:20.521 "name": "pt3", 00:18:20.521 "uuid": "20893e8b-6232-5d3c-862d-cf3bb06bdae0", 00:18:20.521 "is_configured": true, 00:18:20.521 "data_offset": 2048, 00:18:20.521 "data_size": 63488 00:18:20.521 } 00:18:20.521 ] 00:18:20.521 }' 00:18:20.521 06:10:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:20.521 06:10:50 -- common/autotest_common.sh@10 -- # set +x 00:18:21.089 06:10:51 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:18:21.089 06:10:51 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:21.089 06:10:51 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:21.089 [2024-06-11 06:10:51.704891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:21.089 [2024-06-11 06:10:51.705165] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:21.089 [2024-06-11 06:10:51.705236] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:18:21.089 [2024-06-11 06:10:51.705332] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:21.089 [2024-06-11 06:10:51.705920] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:21.089 [2024-06-11 06:10:51.706069] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:21.089 [2024-06-11 06:10:51.706261] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:21.089 [2024-06-11 06:10:51.706391] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:21.089 [2024-06-11 06:10:51.706557] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:18:21.089 [2024-06-11 06:10:51.706681] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:21.089 [2024-06-11 06:10:51.706835] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:18:21.089 [2024-06-11 06:10:51.707256] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:18:21.089 [2024-06-11 06:10:51.707360] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:18:21.089 [2024-06-11 06:10:51.707561] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.089 pt2 00:18:21.089 06:10:51 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:18:21.089 06:10:51 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:21.089 06:10:51 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:21.089 06:10:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:21.089 06:10:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:21.089 06:10:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:21.089 06:10:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:21.089 06:10:51 -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:21.089 06:10:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:21.089 06:10:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:21.089 06:10:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:21.089 06:10:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:21.089 06:10:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.089 06:10:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.663 06:10:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:21.663 "name": "raid_bdev1", 00:18:21.663 "uuid": "38e2838a-f14d-475d-a398-9d7b131c801d", 00:18:21.663 "strip_size_kb": 0, 00:18:21.664 "state": "online", 00:18:21.664 "raid_level": "raid1", 00:18:21.664 "superblock": true, 00:18:21.664 "num_base_bdevs": 3, 00:18:21.664 "num_base_bdevs_discovered": 2, 00:18:21.664 "num_base_bdevs_operational": 2, 00:18:21.664 "base_bdevs_list": [ 00:18:21.664 { 00:18:21.664 "name": null, 00:18:21.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.664 "is_configured": false, 00:18:21.664 "data_offset": 2048, 00:18:21.664 "data_size": 63488 00:18:21.664 }, 00:18:21.664 { 00:18:21.664 "name": "pt2", 00:18:21.664 "uuid": "4289a491-9eb9-559e-8503-0ded1fc95ee2", 00:18:21.664 "is_configured": true, 00:18:21.664 "data_offset": 2048, 00:18:21.664 "data_size": 63488 00:18:21.664 }, 00:18:21.664 { 00:18:21.664 "name": "pt3", 00:18:21.664 "uuid": "20893e8b-6232-5d3c-862d-cf3bb06bdae0", 00:18:21.664 "is_configured": true, 00:18:21.664 "data_offset": 2048, 00:18:21.664 "data_size": 63488 00:18:21.664 } 00:18:21.664 ] 00:18:21.664 }' 00:18:21.664 06:10:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:21.664 06:10:51 -- common/autotest_common.sh@10 -- # set +x 00:18:22.230 06:10:52 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:22.230 06:10:52 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:18:22.230 [2024-06-11 06:10:52.745318] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:22.230 06:10:52 -- bdev/bdev_raid.sh@506 -- # '[' 38e2838a-f14d-475d-a398-9d7b131c801d '!=' 38e2838a-f14d-475d-a398-9d7b131c801d ']' 00:18:22.230 06:10:52 -- bdev/bdev_raid.sh@511 -- # killprocess 118474 00:18:22.230 06:10:52 -- common/autotest_common.sh@926 -- # '[' -z 118474 ']' 00:18:22.230 06:10:52 -- common/autotest_common.sh@930 -- # kill -0 118474 00:18:22.230 06:10:52 -- common/autotest_common.sh@931 -- # uname 00:18:22.230 06:10:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:22.230 06:10:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118474 00:18:22.230 killing process with pid 118474 00:18:22.231 06:10:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:22.231 06:10:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:22.231 06:10:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118474' 00:18:22.231 06:10:52 -- common/autotest_common.sh@945 -- # kill 118474 00:18:22.231 06:10:52 -- common/autotest_common.sh@950 -- # wait 118474 00:18:22.231 [2024-06-11 06:10:52.788058] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:22.231 [2024-06-11 06:10:52.788141] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:22.231 [2024-06-11 06:10:52.788220] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:22.231 [2024-06-11 06:10:52.788325] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:18:22.490 [2024-06-11 06:10:53.090617] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:23.869 06:10:54 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:23.869 00:18:23.869 real 0m19.227s 00:18:23.869 user 0m32.941s 00:18:23.869 sys 0m3.333s 00:18:23.869 06:10:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:23.869 ************************************ 00:18:23.869 END TEST raid_superblock_test 00:18:23.869 ************************************ 00:18:23.869 06:10:54 -- common/autotest_common.sh@10 -- # set +x 00:18:23.869 06:10:54 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:18:23.869 06:10:54 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:23.869 06:10:54 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:18:23.869 06:10:54 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:23.869 06:10:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:23.869 06:10:54 -- common/autotest_common.sh@10 -- # set +x 00:18:24.128 ************************************ 00:18:24.128 START TEST raid_state_function_test 00:18:24.128 ************************************ 00:18:24.128 06:10:54 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 false 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:24.128 
06:10:54 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@226 -- # raid_pid=119087 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 119087' 00:18:24.128 Process raid pid: 119087 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:24.128 06:10:54 -- bdev/bdev_raid.sh@228 -- # waitforlisten 119087 /var/tmp/spdk-raid.sock 00:18:24.128 06:10:54 -- common/autotest_common.sh@819 -- # '[' -z 119087 ']' 00:18:24.128 06:10:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:24.128 06:10:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:24.128 06:10:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:24.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:24.129 06:10:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:24.129 06:10:54 -- common/autotest_common.sh@10 -- # set +x 00:18:24.129 [2024-06-11 06:10:54.619607] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:18:24.129 [2024-06-11 06:10:54.620047] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.388 [2024-06-11 06:10:54.803892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.388 [2024-06-11 06:10:55.027769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.647 [2024-06-11 06:10:55.273168] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:24.906 06:10:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:24.906 06:10:55 -- common/autotest_common.sh@852 -- # return 0 00:18:24.906 06:10:55 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:25.165 [2024-06-11 06:10:55.764143] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:25.165 [2024-06-11 06:10:55.764403] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:25.165 [2024-06-11 06:10:55.764507] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:25.165 [2024-06-11 06:10:55.764561] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:25.165 [2024-06-11 06:10:55.764587] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:25.165 [2024-06-11 06:10:55.764645] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:25.165 [2024-06-11 06:10:55.764725] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:25.165 [2024-06-11 06:10:55.764776] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:25.165 06:10:55 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:25.165 06:10:55 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:18:25.165 06:10:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:25.165 06:10:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:25.165 06:10:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:25.165 06:10:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:25.165 06:10:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:25.165 06:10:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:25.165 06:10:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:25.165 06:10:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:25.165 06:10:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.165 06:10:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.425 06:10:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:25.425 "name": "Existed_Raid", 00:18:25.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.425 "strip_size_kb": 64, 00:18:25.425 "state": "configuring", 00:18:25.425 "raid_level": "raid0", 00:18:25.425 "superblock": false, 00:18:25.425 "num_base_bdevs": 4, 00:18:25.425 "num_base_bdevs_discovered": 0, 00:18:25.425 "num_base_bdevs_operational": 4, 00:18:25.425 "base_bdevs_list": [ 00:18:25.425 { 00:18:25.425 "name": "BaseBdev1", 00:18:25.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.425 "is_configured": false, 00:18:25.425 "data_offset": 0, 00:18:25.425 "data_size": 0 00:18:25.425 }, 00:18:25.425 { 00:18:25.425 "name": "BaseBdev2", 00:18:25.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.425 "is_configured": false, 00:18:25.425 "data_offset": 0, 00:18:25.425 "data_size": 0 00:18:25.425 }, 00:18:25.425 { 00:18:25.425 "name": "BaseBdev3", 00:18:25.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.425 "is_configured": false, 00:18:25.425 "data_offset": 0, 00:18:25.425 "data_size": 0 00:18:25.425 }, 00:18:25.425 { 00:18:25.425 "name": "BaseBdev4", 00:18:25.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.425 "is_configured": false, 00:18:25.425 "data_offset": 0, 00:18:25.425 "data_size": 0 00:18:25.425 } 00:18:25.425 ] 00:18:25.425 }' 00:18:25.425 06:10:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:25.425 06:10:56 -- common/autotest_common.sh@10 -- # set +x 00:18:25.993 06:10:56 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:26.252 [2024-06-11 06:10:56.789187] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:26.252 [2024-06-11 06:10:56.789429] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:18:26.252 06:10:56 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:26.511 [2024-06-11 06:10:57.017256] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:26.511 [2024-06-11 06:10:57.017553] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:26.511 [2024-06-11 06:10:57.017674] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:26.511 [2024-06-11 06:10:57.017737] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:18:26.511 [2024-06-11 06:10:57.017766] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:26.511 [2024-06-11 06:10:57.017887] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:26.511 [2024-06-11 06:10:57.017920] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:26.511 [2024-06-11 06:10:57.017964] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:26.511 06:10:57 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:26.770 [2024-06-11 06:10:57.233485] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:26.770 BaseBdev1 00:18:26.770 06:10:57 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:26.770 06:10:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:26.770 06:10:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:26.770 06:10:57 -- common/autotest_common.sh@889 -- # local i 00:18:26.770 06:10:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:26.770 06:10:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:26.770 06:10:57 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:27.029 06:10:57 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:27.029 [ 00:18:27.029 { 00:18:27.029 "name": "BaseBdev1", 00:18:27.029 "aliases": [ 00:18:27.029 "0eca8187-1472-43a4-b146-e9e24fe12056" 00:18:27.029 ], 00:18:27.029 "product_name": "Malloc disk", 00:18:27.029 "block_size": 512, 00:18:27.029 "num_blocks": 65536, 00:18:27.029 "uuid": "0eca8187-1472-43a4-b146-e9e24fe12056", 00:18:27.029 "assigned_rate_limits": { 00:18:27.029 "rw_ios_per_sec": 0, 00:18:27.029 "rw_mbytes_per_sec": 0, 00:18:27.029 "r_mbytes_per_sec": 0, 00:18:27.029 "w_mbytes_per_sec": 0 00:18:27.029 }, 00:18:27.029 "claimed": true, 00:18:27.029 "claim_type": "exclusive_write", 00:18:27.029 "zoned": false, 00:18:27.029 "supported_io_types": { 00:18:27.029 "read": true, 00:18:27.029 "write": true, 00:18:27.029 "unmap": true, 00:18:27.029 "write_zeroes": true, 00:18:27.029 "flush": true, 00:18:27.029 "reset": true, 00:18:27.029 "compare": false, 00:18:27.029 "compare_and_write": false, 00:18:27.029 "abort": true, 00:18:27.029 "nvme_admin": false, 00:18:27.029 "nvme_io": false 00:18:27.029 }, 00:18:27.029 "memory_domains": [ 00:18:27.029 { 00:18:27.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.029 "dma_device_type": 2 00:18:27.029 } 00:18:27.029 ], 00:18:27.029 "driver_specific": {} 00:18:27.029 } 00:18:27.029 ] 00:18:27.029 06:10:57 -- common/autotest_common.sh@895 -- # return 0 00:18:27.029 06:10:57 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:27.029 06:10:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:27.029 06:10:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:27.029 06:10:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:27.029 06:10:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:27.029 06:10:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:27.029 06:10:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 
00:18:27.029 06:10:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:27.029 06:10:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:27.029 06:10:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:27.029 06:10:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.029 06:10:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.288 06:10:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:27.289 "name": "Existed_Raid", 00:18:27.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.289 "strip_size_kb": 64, 00:18:27.289 "state": "configuring", 00:18:27.289 "raid_level": "raid0", 00:18:27.289 "superblock": false, 00:18:27.289 "num_base_bdevs": 4, 00:18:27.289 "num_base_bdevs_discovered": 1, 00:18:27.289 "num_base_bdevs_operational": 4, 00:18:27.289 "base_bdevs_list": [ 00:18:27.289 { 00:18:27.289 "name": "BaseBdev1", 00:18:27.289 "uuid": "0eca8187-1472-43a4-b146-e9e24fe12056", 00:18:27.289 "is_configured": true, 00:18:27.289 "data_offset": 0, 00:18:27.289 "data_size": 65536 00:18:27.289 }, 00:18:27.289 { 00:18:27.289 "name": "BaseBdev2", 00:18:27.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.289 "is_configured": false, 00:18:27.289 "data_offset": 0, 00:18:27.289 "data_size": 0 00:18:27.289 }, 00:18:27.289 { 00:18:27.289 "name": "BaseBdev3", 00:18:27.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.289 "is_configured": false, 00:18:27.289 "data_offset": 0, 00:18:27.289 "data_size": 0 00:18:27.289 }, 00:18:27.289 { 00:18:27.289 "name": "BaseBdev4", 00:18:27.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.289 "is_configured": false, 00:18:27.289 "data_offset": 0, 00:18:27.289 "data_size": 0 00:18:27.289 } 00:18:27.289 ] 00:18:27.289 }' 00:18:27.289 06:10:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:27.289 06:10:57 -- common/autotest_common.sh@10 -- # set +x 00:18:27.857 06:10:58 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:28.116 [2024-06-11 06:10:58.637803] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:28.116 [2024-06-11 06:10:58.638008] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:28.116 06:10:58 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:28.116 06:10:58 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:28.375 [2024-06-11 06:10:58.905923] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:28.375 [2024-06-11 06:10:58.908304] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:28.375 [2024-06-11 06:10:58.908519] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:28.375 [2024-06-11 06:10:58.908607] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:28.375 [2024-06-11 06:10:58.908666] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:28.375 [2024-06-11 06:10:58.908693] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:28.375 [2024-06-11 06:10:58.908785] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:28.375 06:10:58 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:28.375 06:10:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:28.375 06:10:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:28.375 06:10:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:28.375 06:10:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:28.375 06:10:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:28.375 06:10:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:28.375 06:10:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:28.375 06:10:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:28.375 06:10:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:28.375 06:10:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:28.375 06:10:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:28.375 06:10:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.375 06:10:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.633 06:10:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:28.633 "name": "Existed_Raid", 00:18:28.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.633 "strip_size_kb": 64, 00:18:28.633 "state": "configuring", 00:18:28.633 "raid_level": "raid0", 00:18:28.633 "superblock": false, 00:18:28.633 "num_base_bdevs": 4, 00:18:28.633 "num_base_bdevs_discovered": 1, 00:18:28.633 "num_base_bdevs_operational": 4, 00:18:28.633 "base_bdevs_list": [ 00:18:28.633 { 00:18:28.633 "name": "BaseBdev1", 00:18:28.633 "uuid": "0eca8187-1472-43a4-b146-e9e24fe12056", 00:18:28.633 "is_configured": true, 00:18:28.633 "data_offset": 0, 00:18:28.633 "data_size": 65536 00:18:28.633 }, 00:18:28.633 { 00:18:28.633 "name": "BaseBdev2", 00:18:28.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.633 "is_configured": false, 00:18:28.633 "data_offset": 0, 00:18:28.633 "data_size": 0 00:18:28.633 }, 00:18:28.633 { 00:18:28.633 "name": "BaseBdev3", 00:18:28.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.633 "is_configured": false, 00:18:28.633 "data_offset": 0, 00:18:28.633 "data_size": 0 00:18:28.633 }, 00:18:28.633 { 00:18:28.633 "name": "BaseBdev4", 00:18:28.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.633 "is_configured": false, 00:18:28.633 "data_offset": 0, 00:18:28.633 "data_size": 0 00:18:28.633 } 00:18:28.633 ] 00:18:28.633 }' 00:18:28.633 06:10:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:28.633 06:10:59 -- common/autotest_common.sh@10 -- # set +x 00:18:29.200 06:10:59 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:29.459 [2024-06-11 06:10:59.988988] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:29.459 BaseBdev2 00:18:29.459 06:11:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:29.459 06:11:00 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:29.459 06:11:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:29.459 06:11:00 -- common/autotest_common.sh@889 -- # local i 00:18:29.459 06:11:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:29.459 06:11:00 -- common/autotest_common.sh@890 -- # 
bdev_timeout=2000 00:18:29.459 06:11:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:29.718 06:11:00 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:29.718 [ 00:18:29.718 { 00:18:29.718 "name": "BaseBdev2", 00:18:29.718 "aliases": [ 00:18:29.718 "2cae5b93-2f93-43db-89d4-77bec549aa2d" 00:18:29.718 ], 00:18:29.718 "product_name": "Malloc disk", 00:18:29.718 "block_size": 512, 00:18:29.718 "num_blocks": 65536, 00:18:29.718 "uuid": "2cae5b93-2f93-43db-89d4-77bec549aa2d", 00:18:29.718 "assigned_rate_limits": { 00:18:29.718 "rw_ios_per_sec": 0, 00:18:29.718 "rw_mbytes_per_sec": 0, 00:18:29.718 "r_mbytes_per_sec": 0, 00:18:29.718 "w_mbytes_per_sec": 0 00:18:29.718 }, 00:18:29.718 "claimed": true, 00:18:29.718 "claim_type": "exclusive_write", 00:18:29.718 "zoned": false, 00:18:29.718 "supported_io_types": { 00:18:29.718 "read": true, 00:18:29.718 "write": true, 00:18:29.718 "unmap": true, 00:18:29.718 "write_zeroes": true, 00:18:29.718 "flush": true, 00:18:29.718 "reset": true, 00:18:29.718 "compare": false, 00:18:29.718 "compare_and_write": false, 00:18:29.718 "abort": true, 00:18:29.718 "nvme_admin": false, 00:18:29.718 "nvme_io": false 00:18:29.718 }, 00:18:29.718 "memory_domains": [ 00:18:29.718 { 00:18:29.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.718 "dma_device_type": 2 00:18:29.718 } 00:18:29.718 ], 00:18:29.718 "driver_specific": {} 00:18:29.718 } 00:18:29.718 ] 00:18:29.718 06:11:00 -- common/autotest_common.sh@895 -- # return 0 00:18:29.718 06:11:00 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:29.718 06:11:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:29.718 06:11:00 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:29.978 06:11:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:29.978 06:11:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:29.978 06:11:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:29.978 06:11:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:29.978 06:11:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:29.978 06:11:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:29.978 06:11:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:29.978 06:11:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:29.978 06:11:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:29.978 06:11:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.978 06:11:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.978 06:11:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:29.978 "name": "Existed_Raid", 00:18:29.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.978 "strip_size_kb": 64, 00:18:29.978 "state": "configuring", 00:18:29.978 "raid_level": "raid0", 00:18:29.978 "superblock": false, 00:18:29.978 "num_base_bdevs": 4, 00:18:29.978 "num_base_bdevs_discovered": 2, 00:18:29.978 "num_base_bdevs_operational": 4, 00:18:29.978 "base_bdevs_list": [ 00:18:29.978 { 00:18:29.978 "name": "BaseBdev1", 00:18:29.978 "uuid": "0eca8187-1472-43a4-b146-e9e24fe12056", 00:18:29.978 "is_configured": true, 00:18:29.978 "data_offset": 0, 00:18:29.978 "data_size": 65536 00:18:29.978 }, 
00:18:29.978 { 00:18:29.978 "name": "BaseBdev2", 00:18:29.978 "uuid": "2cae5b93-2f93-43db-89d4-77bec549aa2d", 00:18:29.978 "is_configured": true, 00:18:29.978 "data_offset": 0, 00:18:29.978 "data_size": 65536 00:18:29.978 }, 00:18:29.978 { 00:18:29.978 "name": "BaseBdev3", 00:18:29.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.978 "is_configured": false, 00:18:29.978 "data_offset": 0, 00:18:29.978 "data_size": 0 00:18:29.978 }, 00:18:29.978 { 00:18:29.978 "name": "BaseBdev4", 00:18:29.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.978 "is_configured": false, 00:18:29.978 "data_offset": 0, 00:18:29.978 "data_size": 0 00:18:29.978 } 00:18:29.978 ] 00:18:29.978 }' 00:18:29.978 06:11:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:29.978 06:11:00 -- common/autotest_common.sh@10 -- # set +x 00:18:30.546 06:11:01 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:30.806 [2024-06-11 06:11:01.350337] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:30.806 BaseBdev3 00:18:30.806 06:11:01 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:30.806 06:11:01 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:30.806 06:11:01 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:30.806 06:11:01 -- common/autotest_common.sh@889 -- # local i 00:18:30.806 06:11:01 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:30.806 06:11:01 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:30.806 06:11:01 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:31.065 06:11:01 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:31.324 [ 00:18:31.324 { 00:18:31.324 "name": "BaseBdev3", 00:18:31.324 "aliases": [ 00:18:31.324 "56750848-1406-4c66-9b89-daa0bcbcf053" 00:18:31.324 ], 00:18:31.324 "product_name": "Malloc disk", 00:18:31.324 "block_size": 512, 00:18:31.324 "num_blocks": 65536, 00:18:31.324 "uuid": "56750848-1406-4c66-9b89-daa0bcbcf053", 00:18:31.324 "assigned_rate_limits": { 00:18:31.324 "rw_ios_per_sec": 0, 00:18:31.324 "rw_mbytes_per_sec": 0, 00:18:31.324 "r_mbytes_per_sec": 0, 00:18:31.324 "w_mbytes_per_sec": 0 00:18:31.324 }, 00:18:31.324 "claimed": true, 00:18:31.324 "claim_type": "exclusive_write", 00:18:31.324 "zoned": false, 00:18:31.324 "supported_io_types": { 00:18:31.324 "read": true, 00:18:31.324 "write": true, 00:18:31.324 "unmap": true, 00:18:31.324 "write_zeroes": true, 00:18:31.324 "flush": true, 00:18:31.324 "reset": true, 00:18:31.324 "compare": false, 00:18:31.324 "compare_and_write": false, 00:18:31.325 "abort": true, 00:18:31.325 "nvme_admin": false, 00:18:31.325 "nvme_io": false 00:18:31.325 }, 00:18:31.325 "memory_domains": [ 00:18:31.325 { 00:18:31.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.325 "dma_device_type": 2 00:18:31.325 } 00:18:31.325 ], 00:18:31.325 "driver_specific": {} 00:18:31.325 } 00:18:31.325 ] 00:18:31.325 06:11:01 -- common/autotest_common.sh@895 -- # return 0 00:18:31.325 06:11:01 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:31.325 06:11:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:31.325 06:11:01 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:31.325 06:11:01 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:18:31.325 06:11:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:31.325 06:11:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:31.325 06:11:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:31.325 06:11:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:31.325 06:11:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:31.325 06:11:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:31.325 06:11:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:31.325 06:11:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:31.325 06:11:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.325 06:11:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.583 06:11:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:31.583 "name": "Existed_Raid", 00:18:31.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.584 "strip_size_kb": 64, 00:18:31.584 "state": "configuring", 00:18:31.584 "raid_level": "raid0", 00:18:31.584 "superblock": false, 00:18:31.584 "num_base_bdevs": 4, 00:18:31.584 "num_base_bdevs_discovered": 3, 00:18:31.584 "num_base_bdevs_operational": 4, 00:18:31.584 "base_bdevs_list": [ 00:18:31.584 { 00:18:31.584 "name": "BaseBdev1", 00:18:31.584 "uuid": "0eca8187-1472-43a4-b146-e9e24fe12056", 00:18:31.584 "is_configured": true, 00:18:31.584 "data_offset": 0, 00:18:31.584 "data_size": 65536 00:18:31.584 }, 00:18:31.584 { 00:18:31.584 "name": "BaseBdev2", 00:18:31.584 "uuid": "2cae5b93-2f93-43db-89d4-77bec549aa2d", 00:18:31.584 "is_configured": true, 00:18:31.584 "data_offset": 0, 00:18:31.584 "data_size": 65536 00:18:31.584 }, 00:18:31.584 { 00:18:31.584 "name": "BaseBdev3", 00:18:31.584 "uuid": "56750848-1406-4c66-9b89-daa0bcbcf053", 00:18:31.584 "is_configured": true, 00:18:31.584 "data_offset": 0, 00:18:31.584 "data_size": 65536 00:18:31.584 }, 00:18:31.584 { 00:18:31.584 "name": "BaseBdev4", 00:18:31.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.584 "is_configured": false, 00:18:31.584 "data_offset": 0, 00:18:31.584 "data_size": 0 00:18:31.584 } 00:18:31.584 ] 00:18:31.584 }' 00:18:31.584 06:11:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:31.584 06:11:01 -- common/autotest_common.sh@10 -- # set +x 00:18:32.151 06:11:02 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:32.409 [2024-06-11 06:11:02.828878] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:32.409 [2024-06-11 06:11:02.829148] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:18:32.409 [2024-06-11 06:11:02.829190] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:32.409 [2024-06-11 06:11:02.829436] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:18:32.409 [2024-06-11 06:11:02.829914] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:18:32.409 [2024-06-11 06:11:02.830025] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:18:32.409 [2024-06-11 06:11:02.830386] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.409 BaseBdev4 00:18:32.409 06:11:02 -- bdev/bdev_raid.sh@257 -- # 
waitforbdev BaseBdev4 00:18:32.409 06:11:02 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:32.409 06:11:02 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:32.409 06:11:02 -- common/autotest_common.sh@889 -- # local i 00:18:32.409 06:11:02 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:32.409 06:11:02 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:32.409 06:11:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:32.409 06:11:03 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:32.668 [ 00:18:32.668 { 00:18:32.668 "name": "BaseBdev4", 00:18:32.668 "aliases": [ 00:18:32.668 "7e0361a6-a8fe-4d9e-9a57-76dbf362f3d7" 00:18:32.668 ], 00:18:32.668 "product_name": "Malloc disk", 00:18:32.668 "block_size": 512, 00:18:32.668 "num_blocks": 65536, 00:18:32.668 "uuid": "7e0361a6-a8fe-4d9e-9a57-76dbf362f3d7", 00:18:32.668 "assigned_rate_limits": { 00:18:32.668 "rw_ios_per_sec": 0, 00:18:32.668 "rw_mbytes_per_sec": 0, 00:18:32.668 "r_mbytes_per_sec": 0, 00:18:32.668 "w_mbytes_per_sec": 0 00:18:32.668 }, 00:18:32.668 "claimed": true, 00:18:32.668 "claim_type": "exclusive_write", 00:18:32.668 "zoned": false, 00:18:32.668 "supported_io_types": { 00:18:32.668 "read": true, 00:18:32.668 "write": true, 00:18:32.668 "unmap": true, 00:18:32.668 "write_zeroes": true, 00:18:32.668 "flush": true, 00:18:32.668 "reset": true, 00:18:32.668 "compare": false, 00:18:32.668 "compare_and_write": false, 00:18:32.668 "abort": true, 00:18:32.668 "nvme_admin": false, 00:18:32.668 "nvme_io": false 00:18:32.668 }, 00:18:32.668 "memory_domains": [ 00:18:32.668 { 00:18:32.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.668 "dma_device_type": 2 00:18:32.668 } 00:18:32.668 ], 00:18:32.668 "driver_specific": {} 00:18:32.668 } 00:18:32.668 ] 00:18:32.668 06:11:03 -- common/autotest_common.sh@895 -- # return 0 00:18:32.668 06:11:03 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:32.668 06:11:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:32.668 06:11:03 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:32.668 06:11:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:32.668 06:11:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:32.668 06:11:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:32.668 06:11:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:32.668 06:11:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:32.668 06:11:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:32.668 06:11:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:32.668 06:11:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:32.668 06:11:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:32.668 06:11:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.668 06:11:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.927 06:11:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:32.927 "name": "Existed_Raid", 00:18:32.927 "uuid": "d2e27168-f913-4b9a-8a6a-22b21a111844", 00:18:32.927 "strip_size_kb": 64, 00:18:32.927 "state": "online", 00:18:32.927 "raid_level": "raid0", 00:18:32.927 "superblock": false, 00:18:32.927 
"num_base_bdevs": 4, 00:18:32.927 "num_base_bdevs_discovered": 4, 00:18:32.927 "num_base_bdevs_operational": 4, 00:18:32.927 "base_bdevs_list": [ 00:18:32.927 { 00:18:32.927 "name": "BaseBdev1", 00:18:32.927 "uuid": "0eca8187-1472-43a4-b146-e9e24fe12056", 00:18:32.927 "is_configured": true, 00:18:32.927 "data_offset": 0, 00:18:32.927 "data_size": 65536 00:18:32.927 }, 00:18:32.927 { 00:18:32.927 "name": "BaseBdev2", 00:18:32.927 "uuid": "2cae5b93-2f93-43db-89d4-77bec549aa2d", 00:18:32.927 "is_configured": true, 00:18:32.927 "data_offset": 0, 00:18:32.927 "data_size": 65536 00:18:32.927 }, 00:18:32.927 { 00:18:32.927 "name": "BaseBdev3", 00:18:32.927 "uuid": "56750848-1406-4c66-9b89-daa0bcbcf053", 00:18:32.927 "is_configured": true, 00:18:32.927 "data_offset": 0, 00:18:32.927 "data_size": 65536 00:18:32.927 }, 00:18:32.927 { 00:18:32.927 "name": "BaseBdev4", 00:18:32.927 "uuid": "7e0361a6-a8fe-4d9e-9a57-76dbf362f3d7", 00:18:32.927 "is_configured": true, 00:18:32.927 "data_offset": 0, 00:18:32.927 "data_size": 65536 00:18:32.927 } 00:18:32.927 ] 00:18:32.927 }' 00:18:32.927 06:11:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:32.927 06:11:03 -- common/autotest_common.sh@10 -- # set +x 00:18:33.496 06:11:03 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:33.755 [2024-06-11 06:11:04.157270] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:33.755 [2024-06-11 06:11:04.157489] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:33.755 [2024-06-11 06:11:04.157706] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:33.755 06:11:04 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:33.755 06:11:04 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:18:33.755 06:11:04 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:33.755 06:11:04 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:33.755 06:11:04 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:33.755 06:11:04 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:18:33.755 06:11:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:33.755 06:11:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:33.755 06:11:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:33.755 06:11:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:33.755 06:11:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:33.755 06:11:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:33.755 06:11:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:33.755 06:11:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:33.755 06:11:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:33.755 06:11:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.755 06:11:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.013 06:11:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:34.013 "name": "Existed_Raid", 00:18:34.013 "uuid": "d2e27168-f913-4b9a-8a6a-22b21a111844", 00:18:34.013 "strip_size_kb": 64, 00:18:34.013 "state": "offline", 00:18:34.013 "raid_level": "raid0", 00:18:34.013 "superblock": false, 00:18:34.013 "num_base_bdevs": 4, 00:18:34.013 "num_base_bdevs_discovered": 3, 00:18:34.013 "num_base_bdevs_operational": 3, 00:18:34.013 
"base_bdevs_list": [ 00:18:34.013 { 00:18:34.013 "name": null, 00:18:34.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.013 "is_configured": false, 00:18:34.013 "data_offset": 0, 00:18:34.013 "data_size": 65536 00:18:34.013 }, 00:18:34.013 { 00:18:34.013 "name": "BaseBdev2", 00:18:34.013 "uuid": "2cae5b93-2f93-43db-89d4-77bec549aa2d", 00:18:34.013 "is_configured": true, 00:18:34.013 "data_offset": 0, 00:18:34.013 "data_size": 65536 00:18:34.013 }, 00:18:34.013 { 00:18:34.013 "name": "BaseBdev3", 00:18:34.013 "uuid": "56750848-1406-4c66-9b89-daa0bcbcf053", 00:18:34.013 "is_configured": true, 00:18:34.013 "data_offset": 0, 00:18:34.013 "data_size": 65536 00:18:34.013 }, 00:18:34.013 { 00:18:34.013 "name": "BaseBdev4", 00:18:34.013 "uuid": "7e0361a6-a8fe-4d9e-9a57-76dbf362f3d7", 00:18:34.013 "is_configured": true, 00:18:34.013 "data_offset": 0, 00:18:34.013 "data_size": 65536 00:18:34.013 } 00:18:34.013 ] 00:18:34.013 }' 00:18:34.013 06:11:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:34.013 06:11:04 -- common/autotest_common.sh@10 -- # set +x 00:18:34.636 06:11:05 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:34.636 06:11:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:34.636 06:11:05 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:34.636 06:11:05 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.895 06:11:05 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:34.895 06:11:05 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:34.895 06:11:05 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:34.895 [2024-06-11 06:11:05.508016] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:35.154 06:11:05 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:35.154 06:11:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:35.154 06:11:05 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.154 06:11:05 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:35.414 06:11:05 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:35.414 06:11:05 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:35.414 06:11:05 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:35.414 [2024-06-11 06:11:06.031861] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:35.672 06:11:06 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:35.672 06:11:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:35.672 06:11:06 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:35.672 06:11:06 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.932 06:11:06 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:35.932 06:11:06 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:35.932 06:11:06 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:35.932 [2024-06-11 06:11:06.542435] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:35.932 [2024-06-11 06:11:06.542711] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 
name Existed_Raid, state offline 00:18:36.191 06:11:06 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:36.191 06:11:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:36.191 06:11:06 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.191 06:11:06 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:36.449 06:11:06 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:36.449 06:11:06 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:36.449 06:11:06 -- bdev/bdev_raid.sh@287 -- # killprocess 119087 00:18:36.449 06:11:06 -- common/autotest_common.sh@926 -- # '[' -z 119087 ']' 00:18:36.449 06:11:06 -- common/autotest_common.sh@930 -- # kill -0 119087 00:18:36.449 06:11:06 -- common/autotest_common.sh@931 -- # uname 00:18:36.449 06:11:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:36.449 06:11:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119087 00:18:36.449 killing process with pid 119087 00:18:36.449 06:11:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:36.449 06:11:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:36.449 06:11:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119087' 00:18:36.449 06:11:06 -- common/autotest_common.sh@945 -- # kill 119087 00:18:36.449 06:11:06 -- common/autotest_common.sh@950 -- # wait 119087 00:18:36.449 [2024-06-11 06:11:06.924030] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:36.449 [2024-06-11 06:11:06.924176] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:37.827 ************************************ 00:18:37.827 END TEST raid_state_function_test 00:18:37.827 ************************************ 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:37.827 00:18:37.827 real 0m13.774s 00:18:37.827 user 0m23.163s 00:18:37.827 sys 0m2.387s 00:18:37.827 06:11:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:37.827 06:11:08 -- common/autotest_common.sh@10 -- # set +x 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:18:37.827 06:11:08 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:37.827 06:11:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:37.827 06:11:08 -- common/autotest_common.sh@10 -- # set +x 00:18:37.827 ************************************ 00:18:37.827 START TEST raid_state_function_test_sb 00:18:37.827 ************************************ 00:18:37.827 06:11:08 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 true 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 
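The xtrace records above and below show raid_state_function_test_sb assembling its base bdev name list one echo at a time before declaring base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4'). A standalone sketch of that loop, assuming the four-bdev raid0 configuration this run uses (the helper's exact body in bdev_raid.sh may differ):

    num_base_bdevs=4
    base_bdevs=()
    for ((i = 1; i <= num_base_bdevs; i++)); do
        base_bdevs+=("BaseBdev$i")   # BaseBdev1 .. BaseBdev4, matching the echoes in the trace
    done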
00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@226 -- # raid_pid=119518 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 119518' 00:18:37.827 Process raid pid: 119518 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@228 -- # waitforlisten 119518 /var/tmp/spdk-raid.sock 00:18:37.827 06:11:08 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:37.827 06:11:08 -- common/autotest_common.sh@819 -- # '[' -z 119518 ']' 00:18:37.827 06:11:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:37.827 06:11:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:37.827 06:11:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:37.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:37.827 06:11:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:37.827 06:11:08 -- common/autotest_common.sh@10 -- # set +x 00:18:37.827 [2024-06-11 06:11:08.466678] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
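The records above show the test launching SPDK's bdev_svc application on a private RPC socket (-r /var/tmp/spdk-raid.sock) with bdev_raid debug logging enabled, then blocking in waitforlisten until the socket answers. A minimal sketch of that launch-and-wait pattern, assuming the paths from this log; the polling loop below stands in for the waitforlisten helper from autotest_common.sh:

    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/spdk-raid.sock
    "$spdk/test/app/bdev_svc/bdev_svc" -r "$sock" -i 0 -L bdev_raid &
    raid_pid=$!
    # poll until the app services RPCs on the socket before driving it
    until "$spdk/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done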
00:18:37.827 [2024-06-11 06:11:08.467132] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.086 [2024-06-11 06:11:08.655719] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.345 [2024-06-11 06:11:08.935111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.604 [2024-06-11 06:11:09.186585] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:38.863 06:11:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:38.863 06:11:09 -- common/autotest_common.sh@852 -- # return 0 00:18:38.863 06:11:09 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:39.122 [2024-06-11 06:11:09.608138] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:39.122 [2024-06-11 06:11:09.608417] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:39.122 [2024-06-11 06:11:09.608520] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:39.122 [2024-06-11 06:11:09.608578] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:39.122 [2024-06-11 06:11:09.608604] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:39.122 [2024-06-11 06:11:09.608666] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:39.122 [2024-06-11 06:11:09.608745] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:39.122 [2024-06-11 06:11:09.608808] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:39.122 06:11:09 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:39.122 06:11:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:39.122 06:11:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:39.122 06:11:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:39.122 06:11:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:39.122 06:11:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:39.122 06:11:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:39.122 06:11:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:39.122 06:11:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:39.122 06:11:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:39.122 06:11:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.122 06:11:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.381 06:11:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:39.381 "name": "Existed_Raid", 00:18:39.381 "uuid": "d03cfbb1-bae3-4aff-bf3e-3efc57f96585", 00:18:39.381 "strip_size_kb": 64, 00:18:39.381 "state": "configuring", 00:18:39.381 "raid_level": "raid0", 00:18:39.381 "superblock": true, 00:18:39.381 "num_base_bdevs": 4, 00:18:39.381 "num_base_bdevs_discovered": 0, 00:18:39.381 "num_base_bdevs_operational": 4, 00:18:39.381 "base_bdevs_list": [ 00:18:39.381 { 00:18:39.381 
"name": "BaseBdev1", 00:18:39.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.381 "is_configured": false, 00:18:39.381 "data_offset": 0, 00:18:39.381 "data_size": 0 00:18:39.381 }, 00:18:39.381 { 00:18:39.381 "name": "BaseBdev2", 00:18:39.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.381 "is_configured": false, 00:18:39.381 "data_offset": 0, 00:18:39.381 "data_size": 0 00:18:39.381 }, 00:18:39.381 { 00:18:39.381 "name": "BaseBdev3", 00:18:39.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.381 "is_configured": false, 00:18:39.381 "data_offset": 0, 00:18:39.381 "data_size": 0 00:18:39.381 }, 00:18:39.381 { 00:18:39.381 "name": "BaseBdev4", 00:18:39.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.381 "is_configured": false, 00:18:39.381 "data_offset": 0, 00:18:39.381 "data_size": 0 00:18:39.381 } 00:18:39.381 ] 00:18:39.381 }' 00:18:39.381 06:11:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:39.381 06:11:09 -- common/autotest_common.sh@10 -- # set +x 00:18:39.949 06:11:10 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:40.208 [2024-06-11 06:11:10.652179] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:40.208 [2024-06-11 06:11:10.652409] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:18:40.208 06:11:10 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:40.208 [2024-06-11 06:11:10.832304] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:40.208 [2024-06-11 06:11:10.832538] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:40.208 [2024-06-11 06:11:10.832651] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:40.208 [2024-06-11 06:11:10.832714] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:40.208 [2024-06-11 06:11:10.832785] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:40.208 [2024-06-11 06:11:10.832878] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:40.208 [2024-06-11 06:11:10.833004] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:40.208 [2024-06-11 06:11:10.833059] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:40.208 06:11:10 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:40.776 [2024-06-11 06:11:11.116668] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:40.776 BaseBdev1 00:18:40.776 06:11:11 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:40.777 06:11:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:40.777 06:11:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:40.777 06:11:11 -- common/autotest_common.sh@889 -- # local i 00:18:40.777 06:11:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:40.777 06:11:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:40.777 06:11:11 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:40.777 06:11:11 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:41.036 [ 00:18:41.036 { 00:18:41.036 "name": "BaseBdev1", 00:18:41.036 "aliases": [ 00:18:41.036 "0d42a2b5-4e0d-456e-b74c-2cc1b3f4c47a" 00:18:41.036 ], 00:18:41.036 "product_name": "Malloc disk", 00:18:41.036 "block_size": 512, 00:18:41.036 "num_blocks": 65536, 00:18:41.036 "uuid": "0d42a2b5-4e0d-456e-b74c-2cc1b3f4c47a", 00:18:41.036 "assigned_rate_limits": { 00:18:41.036 "rw_ios_per_sec": 0, 00:18:41.036 "rw_mbytes_per_sec": 0, 00:18:41.036 "r_mbytes_per_sec": 0, 00:18:41.036 "w_mbytes_per_sec": 0 00:18:41.036 }, 00:18:41.036 "claimed": true, 00:18:41.036 "claim_type": "exclusive_write", 00:18:41.036 "zoned": false, 00:18:41.036 "supported_io_types": { 00:18:41.036 "read": true, 00:18:41.036 "write": true, 00:18:41.036 "unmap": true, 00:18:41.036 "write_zeroes": true, 00:18:41.036 "flush": true, 00:18:41.036 "reset": true, 00:18:41.036 "compare": false, 00:18:41.036 "compare_and_write": false, 00:18:41.036 "abort": true, 00:18:41.036 "nvme_admin": false, 00:18:41.036 "nvme_io": false 00:18:41.036 }, 00:18:41.036 "memory_domains": [ 00:18:41.036 { 00:18:41.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:41.036 "dma_device_type": 2 00:18:41.036 } 00:18:41.036 ], 00:18:41.036 "driver_specific": {} 00:18:41.036 } 00:18:41.036 ] 00:18:41.036 06:11:11 -- common/autotest_common.sh@895 -- # return 0 00:18:41.036 06:11:11 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:41.036 06:11:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:41.036 06:11:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:41.036 06:11:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:41.036 06:11:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:41.036 06:11:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:41.036 06:11:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:41.036 06:11:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:41.036 06:11:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:41.036 06:11:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:41.036 06:11:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.036 06:11:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.294 06:11:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:41.294 "name": "Existed_Raid", 00:18:41.294 "uuid": "a2d2c390-77ac-470f-a509-f0e702a73fd1", 00:18:41.294 "strip_size_kb": 64, 00:18:41.294 "state": "configuring", 00:18:41.294 "raid_level": "raid0", 00:18:41.294 "superblock": true, 00:18:41.294 "num_base_bdevs": 4, 00:18:41.294 "num_base_bdevs_discovered": 1, 00:18:41.294 "num_base_bdevs_operational": 4, 00:18:41.294 "base_bdevs_list": [ 00:18:41.294 { 00:18:41.294 "name": "BaseBdev1", 00:18:41.294 "uuid": "0d42a2b5-4e0d-456e-b74c-2cc1b3f4c47a", 00:18:41.294 "is_configured": true, 00:18:41.294 "data_offset": 2048, 00:18:41.294 "data_size": 63488 00:18:41.294 }, 00:18:41.294 { 00:18:41.294 "name": "BaseBdev2", 00:18:41.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.294 "is_configured": false, 00:18:41.294 "data_offset": 0, 00:18:41.294 "data_size": 0 00:18:41.294 }, 
00:18:41.294 { 00:18:41.294 "name": "BaseBdev3", 00:18:41.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.294 "is_configured": false, 00:18:41.294 "data_offset": 0, 00:18:41.294 "data_size": 0 00:18:41.294 }, 00:18:41.294 { 00:18:41.294 "name": "BaseBdev4", 00:18:41.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.294 "is_configured": false, 00:18:41.294 "data_offset": 0, 00:18:41.294 "data_size": 0 00:18:41.294 } 00:18:41.294 ] 00:18:41.294 }' 00:18:41.294 06:11:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:41.294 06:11:11 -- common/autotest_common.sh@10 -- # set +x 00:18:41.861 06:11:12 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:42.120 [2024-06-11 06:11:12.581038] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:42.120 [2024-06-11 06:11:12.581252] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:42.120 06:11:12 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:42.120 06:11:12 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:42.379 06:11:12 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:42.639 BaseBdev1 00:18:42.639 06:11:13 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:42.639 06:11:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:42.639 06:11:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:42.639 06:11:13 -- common/autotest_common.sh@889 -- # local i 00:18:42.639 06:11:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:42.639 06:11:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:42.639 06:11:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:42.898 06:11:13 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:42.898 [ 00:18:42.898 { 00:18:42.898 "name": "BaseBdev1", 00:18:42.898 "aliases": [ 00:18:42.898 "867d89ff-1dda-4cb8-844f-967598e3d0ae" 00:18:42.898 ], 00:18:42.898 "product_name": "Malloc disk", 00:18:42.898 "block_size": 512, 00:18:42.898 "num_blocks": 65536, 00:18:42.898 "uuid": "867d89ff-1dda-4cb8-844f-967598e3d0ae", 00:18:42.898 "assigned_rate_limits": { 00:18:42.898 "rw_ios_per_sec": 0, 00:18:42.898 "rw_mbytes_per_sec": 0, 00:18:42.898 "r_mbytes_per_sec": 0, 00:18:42.898 "w_mbytes_per_sec": 0 00:18:42.898 }, 00:18:42.898 "claimed": false, 00:18:42.898 "zoned": false, 00:18:42.898 "supported_io_types": { 00:18:42.898 "read": true, 00:18:42.898 "write": true, 00:18:42.898 "unmap": true, 00:18:42.898 "write_zeroes": true, 00:18:42.898 "flush": true, 00:18:42.898 "reset": true, 00:18:42.898 "compare": false, 00:18:42.898 "compare_and_write": false, 00:18:42.898 "abort": true, 00:18:42.898 "nvme_admin": false, 00:18:42.898 "nvme_io": false 00:18:42.898 }, 00:18:42.898 "memory_domains": [ 00:18:42.898 { 00:18:42.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.898 "dma_device_type": 2 00:18:42.898 } 00:18:42.898 ], 00:18:42.898 "driver_specific": {} 00:18:42.898 } 00:18:42.898 ] 00:18:42.898 06:11:13 -- common/autotest_common.sh@895 -- # return 0 00:18:42.898 06:11:13 -- bdev/bdev_raid.sh@253 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:43.158 [2024-06-11 06:11:13.668545] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:43.158 [2024-06-11 06:11:13.671015] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:43.158 [2024-06-11 06:11:13.671221] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:43.158 [2024-06-11 06:11:13.671372] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:43.158 [2024-06-11 06:11:13.671437] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:43.158 [2024-06-11 06:11:13.671515] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:43.158 [2024-06-11 06:11:13.671563] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:43.158 06:11:13 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:43.158 06:11:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:43.158 06:11:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:43.158 06:11:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:43.158 06:11:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:43.158 06:11:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:43.158 06:11:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:43.158 06:11:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:43.158 06:11:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:43.158 06:11:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:43.158 06:11:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:43.158 06:11:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:43.158 06:11:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.158 06:11:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.417 06:11:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:43.417 "name": "Existed_Raid", 00:18:43.417 "uuid": "1fb54457-5936-429b-ad4d-3717d9c4a8da", 00:18:43.417 "strip_size_kb": 64, 00:18:43.417 "state": "configuring", 00:18:43.417 "raid_level": "raid0", 00:18:43.417 "superblock": true, 00:18:43.417 "num_base_bdevs": 4, 00:18:43.417 "num_base_bdevs_discovered": 1, 00:18:43.417 "num_base_bdevs_operational": 4, 00:18:43.417 "base_bdevs_list": [ 00:18:43.417 { 00:18:43.417 "name": "BaseBdev1", 00:18:43.417 "uuid": "867d89ff-1dda-4cb8-844f-967598e3d0ae", 00:18:43.417 "is_configured": true, 00:18:43.417 "data_offset": 2048, 00:18:43.417 "data_size": 63488 00:18:43.417 }, 00:18:43.417 { 00:18:43.417 "name": "BaseBdev2", 00:18:43.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.417 "is_configured": false, 00:18:43.417 "data_offset": 0, 00:18:43.417 "data_size": 0 00:18:43.417 }, 00:18:43.417 { 00:18:43.417 "name": "BaseBdev3", 00:18:43.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.417 "is_configured": false, 00:18:43.417 "data_offset": 0, 00:18:43.417 "data_size": 0 00:18:43.417 }, 00:18:43.417 { 00:18:43.417 "name": "BaseBdev4", 00:18:43.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.417 "is_configured": 
false, 00:18:43.417 "data_offset": 0, 00:18:43.417 "data_size": 0 00:18:43.417 } 00:18:43.417 ] 00:18:43.417 }' 00:18:43.417 06:11:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:43.417 06:11:13 -- common/autotest_common.sh@10 -- # set +x 00:18:43.985 06:11:14 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:44.244 [2024-06-11 06:11:14.721124] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:44.244 BaseBdev2 00:18:44.244 06:11:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:44.244 06:11:14 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:44.244 06:11:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:44.244 06:11:14 -- common/autotest_common.sh@889 -- # local i 00:18:44.244 06:11:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:44.244 06:11:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:44.244 06:11:14 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:44.503 06:11:14 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:44.763 [ 00:18:44.763 { 00:18:44.763 "name": "BaseBdev2", 00:18:44.763 "aliases": [ 00:18:44.763 "c4dcdbc9-0b29-4564-bd96-6ad444ca9002" 00:18:44.763 ], 00:18:44.763 "product_name": "Malloc disk", 00:18:44.763 "block_size": 512, 00:18:44.763 "num_blocks": 65536, 00:18:44.763 "uuid": "c4dcdbc9-0b29-4564-bd96-6ad444ca9002", 00:18:44.763 "assigned_rate_limits": { 00:18:44.763 "rw_ios_per_sec": 0, 00:18:44.763 "rw_mbytes_per_sec": 0, 00:18:44.763 "r_mbytes_per_sec": 0, 00:18:44.763 "w_mbytes_per_sec": 0 00:18:44.763 }, 00:18:44.763 "claimed": true, 00:18:44.763 "claim_type": "exclusive_write", 00:18:44.763 "zoned": false, 00:18:44.763 "supported_io_types": { 00:18:44.763 "read": true, 00:18:44.763 "write": true, 00:18:44.763 "unmap": true, 00:18:44.763 "write_zeroes": true, 00:18:44.763 "flush": true, 00:18:44.763 "reset": true, 00:18:44.763 "compare": false, 00:18:44.763 "compare_and_write": false, 00:18:44.763 "abort": true, 00:18:44.763 "nvme_admin": false, 00:18:44.763 "nvme_io": false 00:18:44.763 }, 00:18:44.763 "memory_domains": [ 00:18:44.763 { 00:18:44.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.763 "dma_device_type": 2 00:18:44.763 } 00:18:44.763 ], 00:18:44.763 "driver_specific": {} 00:18:44.763 } 00:18:44.763 ] 00:18:44.763 06:11:15 -- common/autotest_common.sh@895 -- # return 0 00:18:44.763 06:11:15 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:44.763 06:11:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:44.763 06:11:15 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:44.763 06:11:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:44.763 06:11:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:44.763 06:11:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:44.763 06:11:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:44.763 06:11:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:44.763 06:11:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:44.763 06:11:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:44.763 06:11:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:44.763 
06:11:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:44.763 06:11:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.763 06:11:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.023 06:11:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:45.023 "name": "Existed_Raid", 00:18:45.023 "uuid": "1fb54457-5936-429b-ad4d-3717d9c4a8da", 00:18:45.023 "strip_size_kb": 64, 00:18:45.023 "state": "configuring", 00:18:45.023 "raid_level": "raid0", 00:18:45.023 "superblock": true, 00:18:45.023 "num_base_bdevs": 4, 00:18:45.023 "num_base_bdevs_discovered": 2, 00:18:45.023 "num_base_bdevs_operational": 4, 00:18:45.023 "base_bdevs_list": [ 00:18:45.023 { 00:18:45.023 "name": "BaseBdev1", 00:18:45.023 "uuid": "867d89ff-1dda-4cb8-844f-967598e3d0ae", 00:18:45.023 "is_configured": true, 00:18:45.023 "data_offset": 2048, 00:18:45.023 "data_size": 63488 00:18:45.023 }, 00:18:45.023 { 00:18:45.023 "name": "BaseBdev2", 00:18:45.023 "uuid": "c4dcdbc9-0b29-4564-bd96-6ad444ca9002", 00:18:45.023 "is_configured": true, 00:18:45.023 "data_offset": 2048, 00:18:45.023 "data_size": 63488 00:18:45.023 }, 00:18:45.023 { 00:18:45.023 "name": "BaseBdev3", 00:18:45.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.023 "is_configured": false, 00:18:45.023 "data_offset": 0, 00:18:45.023 "data_size": 0 00:18:45.023 }, 00:18:45.023 { 00:18:45.023 "name": "BaseBdev4", 00:18:45.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.023 "is_configured": false, 00:18:45.023 "data_offset": 0, 00:18:45.023 "data_size": 0 00:18:45.023 } 00:18:45.023 ] 00:18:45.023 }' 00:18:45.023 06:11:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:45.023 06:11:15 -- common/autotest_common.sh@10 -- # set +x 00:18:45.592 06:11:16 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:45.850 [2024-06-11 06:11:16.248198] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:45.850 BaseBdev3 00:18:45.851 06:11:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:45.851 06:11:16 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:45.851 06:11:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:45.851 06:11:16 -- common/autotest_common.sh@889 -- # local i 00:18:45.851 06:11:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:45.851 06:11:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:45.851 06:11:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:46.109 06:11:16 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:46.109 [ 00:18:46.109 { 00:18:46.109 "name": "BaseBdev3", 00:18:46.109 "aliases": [ 00:18:46.109 "527cd78e-0c90-4827-b39d-bab061129e4e" 00:18:46.109 ], 00:18:46.109 "product_name": "Malloc disk", 00:18:46.109 "block_size": 512, 00:18:46.109 "num_blocks": 65536, 00:18:46.109 "uuid": "527cd78e-0c90-4827-b39d-bab061129e4e", 00:18:46.109 "assigned_rate_limits": { 00:18:46.109 "rw_ios_per_sec": 0, 00:18:46.109 "rw_mbytes_per_sec": 0, 00:18:46.110 "r_mbytes_per_sec": 0, 00:18:46.110 "w_mbytes_per_sec": 0 00:18:46.110 }, 00:18:46.110 "claimed": true, 00:18:46.110 "claim_type": "exclusive_write", 00:18:46.110 "zoned": false, 
00:18:46.110 "supported_io_types": { 00:18:46.110 "read": true, 00:18:46.110 "write": true, 00:18:46.110 "unmap": true, 00:18:46.110 "write_zeroes": true, 00:18:46.110 "flush": true, 00:18:46.110 "reset": true, 00:18:46.110 "compare": false, 00:18:46.110 "compare_and_write": false, 00:18:46.110 "abort": true, 00:18:46.110 "nvme_admin": false, 00:18:46.110 "nvme_io": false 00:18:46.110 }, 00:18:46.110 "memory_domains": [ 00:18:46.110 { 00:18:46.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.110 "dma_device_type": 2 00:18:46.110 } 00:18:46.110 ], 00:18:46.110 "driver_specific": {} 00:18:46.110 } 00:18:46.110 ] 00:18:46.110 06:11:16 -- common/autotest_common.sh@895 -- # return 0 00:18:46.110 06:11:16 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:46.110 06:11:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:46.110 06:11:16 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:46.110 06:11:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:46.110 06:11:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:46.110 06:11:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:46.110 06:11:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:46.110 06:11:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:46.110 06:11:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:46.110 06:11:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:46.110 06:11:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:46.110 06:11:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:46.110 06:11:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.110 06:11:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.369 06:11:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:46.369 "name": "Existed_Raid", 00:18:46.369 "uuid": "1fb54457-5936-429b-ad4d-3717d9c4a8da", 00:18:46.369 "strip_size_kb": 64, 00:18:46.369 "state": "configuring", 00:18:46.369 "raid_level": "raid0", 00:18:46.369 "superblock": true, 00:18:46.369 "num_base_bdevs": 4, 00:18:46.369 "num_base_bdevs_discovered": 3, 00:18:46.369 "num_base_bdevs_operational": 4, 00:18:46.369 "base_bdevs_list": [ 00:18:46.369 { 00:18:46.369 "name": "BaseBdev1", 00:18:46.369 "uuid": "867d89ff-1dda-4cb8-844f-967598e3d0ae", 00:18:46.369 "is_configured": true, 00:18:46.369 "data_offset": 2048, 00:18:46.369 "data_size": 63488 00:18:46.369 }, 00:18:46.369 { 00:18:46.369 "name": "BaseBdev2", 00:18:46.369 "uuid": "c4dcdbc9-0b29-4564-bd96-6ad444ca9002", 00:18:46.369 "is_configured": true, 00:18:46.369 "data_offset": 2048, 00:18:46.369 "data_size": 63488 00:18:46.369 }, 00:18:46.369 { 00:18:46.369 "name": "BaseBdev3", 00:18:46.369 "uuid": "527cd78e-0c90-4827-b39d-bab061129e4e", 00:18:46.369 "is_configured": true, 00:18:46.369 "data_offset": 2048, 00:18:46.369 "data_size": 63488 00:18:46.369 }, 00:18:46.369 { 00:18:46.369 "name": "BaseBdev4", 00:18:46.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.369 "is_configured": false, 00:18:46.369 "data_offset": 0, 00:18:46.369 "data_size": 0 00:18:46.369 } 00:18:46.369 ] 00:18:46.369 }' 00:18:46.369 06:11:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:46.369 06:11:16 -- common/autotest_common.sh@10 -- # set +x 00:18:46.936 06:11:17 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev4 00:18:47.196 [2024-06-11 06:11:17.760544] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:47.196 [2024-06-11 06:11:17.761098] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:18:47.196 [2024-06-11 06:11:17.761216] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:47.196 [2024-06-11 06:11:17.761405] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:18:47.196 [2024-06-11 06:11:17.761789] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:18:47.196 [2024-06-11 06:11:17.761907] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:18:47.196 [2024-06-11 06:11:17.762132] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.196 BaseBdev4 00:18:47.196 06:11:17 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:47.196 06:11:17 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:47.196 06:11:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:47.196 06:11:17 -- common/autotest_common.sh@889 -- # local i 00:18:47.196 06:11:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:47.196 06:11:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:47.196 06:11:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:47.455 06:11:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:47.714 [ 00:18:47.714 { 00:18:47.714 "name": "BaseBdev4", 00:18:47.714 "aliases": [ 00:18:47.714 "6c10796c-9da1-4593-b0fc-d7bf2d5bc0f4" 00:18:47.714 ], 00:18:47.714 "product_name": "Malloc disk", 00:18:47.714 "block_size": 512, 00:18:47.714 "num_blocks": 65536, 00:18:47.714 "uuid": "6c10796c-9da1-4593-b0fc-d7bf2d5bc0f4", 00:18:47.714 "assigned_rate_limits": { 00:18:47.714 "rw_ios_per_sec": 0, 00:18:47.714 "rw_mbytes_per_sec": 0, 00:18:47.714 "r_mbytes_per_sec": 0, 00:18:47.714 "w_mbytes_per_sec": 0 00:18:47.714 }, 00:18:47.714 "claimed": true, 00:18:47.714 "claim_type": "exclusive_write", 00:18:47.714 "zoned": false, 00:18:47.714 "supported_io_types": { 00:18:47.714 "read": true, 00:18:47.714 "write": true, 00:18:47.714 "unmap": true, 00:18:47.714 "write_zeroes": true, 00:18:47.714 "flush": true, 00:18:47.714 "reset": true, 00:18:47.714 "compare": false, 00:18:47.714 "compare_and_write": false, 00:18:47.714 "abort": true, 00:18:47.714 "nvme_admin": false, 00:18:47.714 "nvme_io": false 00:18:47.714 }, 00:18:47.714 "memory_domains": [ 00:18:47.714 { 00:18:47.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.714 "dma_device_type": 2 00:18:47.714 } 00:18:47.714 ], 00:18:47.714 "driver_specific": {} 00:18:47.714 } 00:18:47.714 ] 00:18:47.714 06:11:18 -- common/autotest_common.sh@895 -- # return 0 00:18:47.714 06:11:18 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:47.714 06:11:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:47.714 06:11:18 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:47.714 06:11:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:47.714 06:11:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:47.714 06:11:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 
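verify_raid_bdev_state, whose locals are being set in the surrounding records, fetches the raid bdev's JSON from bdev_raid_get_bdevs and asserts on its fields with jq. A condensed sketch of that check for the online four-disk state shown above; the real helper compares more fields, including raid_level and strip_size_kb:

    rpc=(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock)
    info=$("${rpc[@]}" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    state=$(jq -r '.state' <<<"$info")
    discovered=$(jq -r '.num_base_bdevs_discovered' <<<"$info")
    [[ $state == online && $discovered -eq 4 ]] || echo "unexpected Existed_Raid state: $state/$discovered"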
00:18:47.714 06:11:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:47.714 06:11:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:47.714 06:11:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:47.714 06:11:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:47.714 06:11:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:47.714 06:11:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:47.714 06:11:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.714 06:11:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.714 06:11:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:47.714 "name": "Existed_Raid", 00:18:47.714 "uuid": "1fb54457-5936-429b-ad4d-3717d9c4a8da", 00:18:47.714 "strip_size_kb": 64, 00:18:47.714 "state": "online", 00:18:47.714 "raid_level": "raid0", 00:18:47.714 "superblock": true, 00:18:47.714 "num_base_bdevs": 4, 00:18:47.714 "num_base_bdevs_discovered": 4, 00:18:47.714 "num_base_bdevs_operational": 4, 00:18:47.714 "base_bdevs_list": [ 00:18:47.714 { 00:18:47.714 "name": "BaseBdev1", 00:18:47.714 "uuid": "867d89ff-1dda-4cb8-844f-967598e3d0ae", 00:18:47.714 "is_configured": true, 00:18:47.714 "data_offset": 2048, 00:18:47.714 "data_size": 63488 00:18:47.714 }, 00:18:47.714 { 00:18:47.714 "name": "BaseBdev2", 00:18:47.714 "uuid": "c4dcdbc9-0b29-4564-bd96-6ad444ca9002", 00:18:47.714 "is_configured": true, 00:18:47.714 "data_offset": 2048, 00:18:47.714 "data_size": 63488 00:18:47.714 }, 00:18:47.714 { 00:18:47.714 "name": "BaseBdev3", 00:18:47.714 "uuid": "527cd78e-0c90-4827-b39d-bab061129e4e", 00:18:47.714 "is_configured": true, 00:18:47.714 "data_offset": 2048, 00:18:47.714 "data_size": 63488 00:18:47.714 }, 00:18:47.714 { 00:18:47.714 "name": "BaseBdev4", 00:18:47.714 "uuid": "6c10796c-9da1-4593-b0fc-d7bf2d5bc0f4", 00:18:47.714 "is_configured": true, 00:18:47.714 "data_offset": 2048, 00:18:47.714 "data_size": 63488 00:18:47.714 } 00:18:47.714 ] 00:18:47.714 }' 00:18:47.714 06:11:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:47.714 06:11:18 -- common/autotest_common.sh@10 -- # set +x 00:18:48.652 06:11:19 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:48.652 [2024-06-11 06:11:19.172928] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:48.652 [2024-06-11 06:11:19.173099] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:48.652 [2024-06-11 06:11:19.173317] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:48.652 06:11:19 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:48.652 06:11:19 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:18:48.652 06:11:19 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:48.652 06:11:19 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:48.652 06:11:19 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:48.652 06:11:19 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:18:48.652 06:11:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:48.652 06:11:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:48.911 06:11:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:48.911 06:11:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:48.911 06:11:19 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=3 00:18:48.911 06:11:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:48.911 06:11:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:48.911 06:11:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:48.911 06:11:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:48.911 06:11:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.911 06:11:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:48.911 06:11:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:48.911 "name": "Existed_Raid", 00:18:48.911 "uuid": "1fb54457-5936-429b-ad4d-3717d9c4a8da", 00:18:48.911 "strip_size_kb": 64, 00:18:48.911 "state": "offline", 00:18:48.911 "raid_level": "raid0", 00:18:48.911 "superblock": true, 00:18:48.911 "num_base_bdevs": 4, 00:18:48.911 "num_base_bdevs_discovered": 3, 00:18:48.911 "num_base_bdevs_operational": 3, 00:18:48.911 "base_bdevs_list": [ 00:18:48.911 { 00:18:48.911 "name": null, 00:18:48.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.911 "is_configured": false, 00:18:48.911 "data_offset": 2048, 00:18:48.911 "data_size": 63488 00:18:48.911 }, 00:18:48.911 { 00:18:48.911 "name": "BaseBdev2", 00:18:48.911 "uuid": "c4dcdbc9-0b29-4564-bd96-6ad444ca9002", 00:18:48.911 "is_configured": true, 00:18:48.911 "data_offset": 2048, 00:18:48.911 "data_size": 63488 00:18:48.911 }, 00:18:48.911 { 00:18:48.911 "name": "BaseBdev3", 00:18:48.911 "uuid": "527cd78e-0c90-4827-b39d-bab061129e4e", 00:18:48.911 "is_configured": true, 00:18:48.911 "data_offset": 2048, 00:18:48.911 "data_size": 63488 00:18:48.911 }, 00:18:48.911 { 00:18:48.911 "name": "BaseBdev4", 00:18:48.911 "uuid": "6c10796c-9da1-4593-b0fc-d7bf2d5bc0f4", 00:18:48.911 "is_configured": true, 00:18:48.911 "data_offset": 2048, 00:18:48.911 "data_size": 63488 00:18:48.911 } 00:18:48.911 ] 00:18:48.911 }' 00:18:48.911 06:11:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:48.911 06:11:19 -- common/autotest_common.sh@10 -- # set +x 00:18:49.479 06:11:20 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:49.479 06:11:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:49.479 06:11:20 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.479 06:11:20 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:49.738 06:11:20 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:49.738 06:11:20 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:49.738 06:11:20 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:49.997 [2024-06-11 06:11:20.488722] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:49.997 06:11:20 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:49.997 06:11:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:49.997 06:11:20 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.997 06:11:20 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:50.255 06:11:20 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:50.255 06:11:20 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:50.255 06:11:20 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev3 00:18:50.514 [2024-06-11 06:11:21.011526] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:50.514 06:11:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:50.514 06:11:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:50.514 06:11:21 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.514 06:11:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:50.772 06:11:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:50.772 06:11:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:50.772 06:11:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:51.049 [2024-06-11 06:11:21.616894] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:51.049 [2024-06-11 06:11:21.617176] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:18:51.321 06:11:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:51.321 06:11:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:51.321 06:11:21 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.321 06:11:21 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:51.580 06:11:21 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:51.580 06:11:21 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:51.580 06:11:21 -- bdev/bdev_raid.sh@287 -- # killprocess 119518 00:18:51.580 06:11:21 -- common/autotest_common.sh@926 -- # '[' -z 119518 ']' 00:18:51.580 06:11:21 -- common/autotest_common.sh@930 -- # kill -0 119518 00:18:51.580 06:11:21 -- common/autotest_common.sh@931 -- # uname 00:18:51.580 06:11:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:51.580 06:11:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119518 00:18:51.580 06:11:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:51.580 06:11:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:51.580 06:11:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119518' 00:18:51.580 killing process with pid 119518 00:18:51.580 06:11:22 -- common/autotest_common.sh@945 -- # kill 119518 00:18:51.580 [2024-06-11 06:11:22.005350] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:51.580 06:11:22 -- common/autotest_common.sh@950 -- # wait 119518 00:18:51.580 [2024-06-11 06:11:22.005660] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:52.958 ************************************ 00:18:52.958 END TEST raid_state_function_test_sb 00:18:52.958 ************************************ 00:18:52.958 06:11:23 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:52.958 00:18:52.958 real 0m14.999s 00:18:52.958 user 0m25.295s 00:18:52.958 sys 0m2.520s 00:18:52.958 06:11:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:52.958 06:11:23 -- common/autotest_common.sh@10 -- # set +x 00:18:52.958 06:11:23 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:18:52.958 06:11:23 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:52.958 06:11:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:52.958 06:11:23 -- common/autotest_common.sh@10 -- # set +x 00:18:52.958 ************************************ 00:18:52.958 START 
TEST raid_superblock_test 00:18:52.958 ************************************ 00:18:52.958 06:11:23 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 4 00:18:52.958 06:11:23 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:18:52.958 06:11:23 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:18:52.958 06:11:23 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:52.958 06:11:23 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:52.958 06:11:23 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:52.958 06:11:23 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:52.958 06:11:23 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:52.958 06:11:23 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:52.958 06:11:23 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:52.958 06:11:23 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:52.958 06:11:23 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:52.958 06:11:23 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:52.958 06:11:23 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:52.958 06:11:23 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:18:52.958 06:11:23 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:18:52.958 06:11:23 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:18:52.958 06:11:23 -- bdev/bdev_raid.sh@357 -- # raid_pid=119973 00:18:52.958 06:11:23 -- bdev/bdev_raid.sh@358 -- # waitforlisten 119973 /var/tmp/spdk-raid.sock 00:18:52.958 06:11:23 -- common/autotest_common.sh@819 -- # '[' -z 119973 ']' 00:18:52.958 06:11:23 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:52.958 06:11:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:52.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:52.958 06:11:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:52.958 06:11:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:52.958 06:11:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:52.958 06:11:23 -- common/autotest_common.sh@10 -- # set +x 00:18:52.958 [2024-06-11 06:11:23.529246] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
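raid_superblock_test, whose banner appears above, drives the same bdev_svc app but builds its array from passthru bdevs (pt1..pt4) layered over malloc disks, as the bdev_passthru_create records that follow show. A minimal sketch of that construction, reusing the RPC invocations and fixed test UUIDs visible in this log (-s is the superblock_create_arg set earlier in the trace):

    rpc=(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock)
    for i in 1 2 3 4; do
        "${rpc[@]}" bdev_malloc_create 32 512 -b "malloc$i"
        "${rpc[@]}" bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    # -z 64: 64 KiB strip size; -s: store a superblock on each base bdev
    "${rpc[@]}" bdev_raid_create -z 64 -s -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1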
00:18:52.958 [2024-06-11 06:11:23.530273] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119973 ] 00:18:53.218 [2024-06-11 06:11:23.713639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.477 [2024-06-11 06:11:23.944113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.736 [2024-06-11 06:11:24.190170] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:53.995 06:11:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:53.995 06:11:24 -- common/autotest_common.sh@852 -- # return 0 00:18:53.995 06:11:24 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:53.995 06:11:24 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:53.995 06:11:24 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:53.995 06:11:24 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:53.995 06:11:24 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:53.995 06:11:24 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:53.995 06:11:24 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:53.995 06:11:24 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:53.995 06:11:24 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:54.253 malloc1 00:18:54.253 06:11:24 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:54.513 [2024-06-11 06:11:24.903326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:54.513 [2024-06-11 06:11:24.903664] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.513 [2024-06-11 06:11:24.903749] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:18:54.513 [2024-06-11 06:11:24.903870] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.513 [2024-06-11 06:11:24.906617] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.513 [2024-06-11 06:11:24.906774] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:54.513 pt1 00:18:54.513 06:11:24 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:54.513 06:11:24 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:54.513 06:11:24 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:54.513 06:11:24 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:54.513 06:11:24 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:54.513 06:11:24 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:54.513 06:11:24 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:54.513 06:11:24 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:54.513 06:11:24 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:54.772 malloc2 00:18:54.772 06:11:25 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:18:54.772 [2024-06-11 06:11:25.375978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:54.772 [2024-06-11 06:11:25.376238] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.772 [2024-06-11 06:11:25.376318] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:54.772 [2024-06-11 06:11:25.376450] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.772 [2024-06-11 06:11:25.379092] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.772 [2024-06-11 06:11:25.379266] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:54.772 pt2 00:18:54.772 06:11:25 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:54.772 06:11:25 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:54.772 06:11:25 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:54.772 06:11:25 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:54.772 06:11:25 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:54.772 06:11:25 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:54.772 06:11:25 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:54.772 06:11:25 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:54.772 06:11:25 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:55.031 malloc3 00:18:55.031 06:11:25 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:55.290 [2024-06-11 06:11:25.841178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:55.290 [2024-06-11 06:11:25.841426] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.290 [2024-06-11 06:11:25.841510] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:18:55.290 [2024-06-11 06:11:25.841625] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.290 [2024-06-11 06:11:25.844267] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.290 [2024-06-11 06:11:25.844436] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:55.290 pt3 00:18:55.290 06:11:25 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:55.290 06:11:25 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:55.290 06:11:25 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:18:55.290 06:11:25 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:18:55.290 06:11:25 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:55.290 06:11:25 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:55.290 06:11:25 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:55.290 06:11:25 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:55.290 06:11:25 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:55.549 malloc4 00:18:55.549 06:11:26 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:18:55.809 [2024-06-11 06:11:26.236551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:55.809 [2024-06-11 06:11:26.236830] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.809 [2024-06-11 06:11:26.236905] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:18:55.809 [2024-06-11 06:11:26.237052] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.809 [2024-06-11 06:11:26.239701] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.809 [2024-06-11 06:11:26.239864] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:55.809 pt4 00:18:55.809 06:11:26 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:55.809 06:11:26 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:55.809 06:11:26 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:55.809 [2024-06-11 06:11:26.412830] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:55.809 [2024-06-11 06:11:26.415231] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:55.809 [2024-06-11 06:11:26.415428] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:55.809 [2024-06-11 06:11:26.415530] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:55.809 [2024-06-11 06:11:26.415829] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:18:55.809 [2024-06-11 06:11:26.415955] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:55.809 [2024-06-11 06:11:26.416123] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:18:55.809 [2024-06-11 06:11:26.416505] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:18:55.809 [2024-06-11 06:11:26.416601] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:18:55.809 [2024-06-11 06:11:26.416870] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.809 06:11:26 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:55.809 06:11:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:55.809 06:11:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:55.809 06:11:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:55.809 06:11:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:55.809 06:11:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:55.809 06:11:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:55.809 06:11:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:55.809 06:11:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:55.809 06:11:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:55.809 06:11:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.809 06:11:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.067 06:11:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:56.067 "name": "raid_bdev1", 00:18:56.067 "uuid": 
"51862a31-67c4-4555-b90a-5120644bc109", 00:18:56.067 "strip_size_kb": 64, 00:18:56.067 "state": "online", 00:18:56.067 "raid_level": "raid0", 00:18:56.067 "superblock": true, 00:18:56.067 "num_base_bdevs": 4, 00:18:56.067 "num_base_bdevs_discovered": 4, 00:18:56.067 "num_base_bdevs_operational": 4, 00:18:56.067 "base_bdevs_list": [ 00:18:56.067 { 00:18:56.067 "name": "pt1", 00:18:56.067 "uuid": "b8cbfe8a-6973-527b-9d18-16843dd0b9a8", 00:18:56.067 "is_configured": true, 00:18:56.067 "data_offset": 2048, 00:18:56.067 "data_size": 63488 00:18:56.067 }, 00:18:56.067 { 00:18:56.067 "name": "pt2", 00:18:56.067 "uuid": "4c11855b-998c-51af-9536-121d1035a81c", 00:18:56.068 "is_configured": true, 00:18:56.068 "data_offset": 2048, 00:18:56.068 "data_size": 63488 00:18:56.068 }, 00:18:56.068 { 00:18:56.068 "name": "pt3", 00:18:56.068 "uuid": "4bd7b952-0c97-597b-8fe9-f6775d34f2d2", 00:18:56.068 "is_configured": true, 00:18:56.068 "data_offset": 2048, 00:18:56.068 "data_size": 63488 00:18:56.068 }, 00:18:56.068 { 00:18:56.068 "name": "pt4", 00:18:56.068 "uuid": "35d84ff6-9f00-582c-9d66-ed01e0770b4c", 00:18:56.068 "is_configured": true, 00:18:56.068 "data_offset": 2048, 00:18:56.068 "data_size": 63488 00:18:56.068 } 00:18:56.068 ] 00:18:56.068 }' 00:18:56.068 06:11:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:56.068 06:11:26 -- common/autotest_common.sh@10 -- # set +x 00:18:56.636 06:11:27 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:56.636 06:11:27 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:56.895 [2024-06-11 06:11:27.453285] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:56.895 06:11:27 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=51862a31-67c4-4555-b90a-5120644bc109 00:18:56.895 06:11:27 -- bdev/bdev_raid.sh@380 -- # '[' -z 51862a31-67c4-4555-b90a-5120644bc109 ']' 00:18:56.895 06:11:27 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:57.153 [2024-06-11 06:11:27.625085] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:57.153 [2024-06-11 06:11:27.625249] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:57.153 [2024-06-11 06:11:27.625497] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:57.153 [2024-06-11 06:11:27.625670] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:57.153 [2024-06-11 06:11:27.625767] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:18:57.153 06:11:27 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.153 06:11:27 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:57.412 06:11:27 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:57.412 06:11:27 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:57.412 06:11:27 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:57.412 06:11:27 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:57.412 06:11:27 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:57.412 06:11:27 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:18:57.671 06:11:28 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:57.671 06:11:28 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:57.929 06:11:28 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:57.929 06:11:28 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:58.188 06:11:28 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:58.188 06:11:28 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:58.188 06:11:28 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:58.188 06:11:28 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:58.188 06:11:28 -- common/autotest_common.sh@640 -- # local es=0 00:18:58.188 06:11:28 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:58.188 06:11:28 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:58.188 06:11:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:58.188 06:11:28 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:58.188 06:11:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:58.188 06:11:28 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:58.188 06:11:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:58.188 06:11:28 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:58.188 06:11:28 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:58.188 06:11:28 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:58.447 [2024-06-11 06:11:28.941280] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:58.447 [2024-06-11 06:11:28.943722] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:58.447 [2024-06-11 06:11:28.943909] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:58.447 [2024-06-11 06:11:28.943980] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:58.447 [2024-06-11 06:11:28.944115] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:58.447 [2024-06-11 06:11:28.944234] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:58.447 [2024-06-11 06:11:28.944460] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:58.447 [2024-06-11 06:11:28.944549] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:18:58.447 [2024-06-11 06:11:28.944780] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:58.447 [2024-06-11 06:11:28.944832] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring 00:18:58.447 request: 00:18:58.447 { 00:18:58.447 "name": "raid_bdev1", 00:18:58.447 "raid_level": "raid0", 00:18:58.447 "base_bdevs": [ 00:18:58.447 "malloc1", 00:18:58.447 "malloc2", 00:18:58.447 "malloc3", 00:18:58.447 "malloc4" 00:18:58.447 ], 00:18:58.447 "superblock": false, 00:18:58.447 "strip_size_kb": 64, 00:18:58.447 "method": "bdev_raid_create", 00:18:58.447 "req_id": 1 00:18:58.447 } 00:18:58.447 Got JSON-RPC error response 00:18:58.447 response: 00:18:58.447 { 00:18:58.447 "code": -17, 00:18:58.447 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:58.447 } 00:18:58.447 06:11:28 -- common/autotest_common.sh@643 -- # es=1 00:18:58.447 06:11:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:58.447 06:11:28 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:58.447 06:11:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:58.447 06:11:28 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.447 06:11:28 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:58.706 06:11:29 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:58.706 06:11:29 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:58.706 06:11:29 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:58.964 [2024-06-11 06:11:29.353337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:58.964 [2024-06-11 06:11:29.353563] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:58.964 [2024-06-11 06:11:29.353630] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:58.964 [2024-06-11 06:11:29.353742] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:58.964 [2024-06-11 06:11:29.356485] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:58.964 [2024-06-11 06:11:29.356673] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:58.964 [2024-06-11 06:11:29.356899] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:58.964 [2024-06-11 06:11:29.357093] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:58.964 pt1 00:18:58.964 06:11:29 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:18:58.964 06:11:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:58.964 06:11:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:58.964 06:11:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:58.964 06:11:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:58.964 06:11:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:58.964 06:11:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:58.964 06:11:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:58.964 06:11:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:58.964 06:11:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:58.964 06:11:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
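Condensed, the xtrace above exercises one RPC pattern end to end: build each base bdev as a malloc disk wrapped in a passthru bdev, assemble the four passthru bdevs into a raid0 bdev with an on-disk superblock, tear the raid and passthru layers back down, and then confirm that re-creating the raid directly on the malloc disks — which still carry that superblock — fails with -17 (File exists). A minimal sketch of that sequence, assuming a bdev_svc app is already listening on /var/tmp/spdk-raid.sock; the $rpc shorthand is introduced here for readability and is not part of the harness:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Base bdevs: 32 MiB malloc disks with 512-byte blocks, each wrapped in a
# passthru bdev with a fixed UUID, exactly as traced above.
for i in 1 2 3 4; do
    $rpc bdev_malloc_create 32 512 -b "malloc$i"
    $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
done

# Assemble a raid0 bdev (64 KiB strip) with a superblock (-s), then tear it
# back down to the bare malloc disks.
$rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
$rpc bdev_raid_delete raid_bdev1
for i in 1 2 3 4; do
    $rpc bdev_passthru_delete "pt$i"
done

# Negative test: the malloc disks still hold the raid superblock, so this
# create must fail with -17 (File exists). The harness wraps the call in its
# NOT helper; a plain if stands in for that here.
if $rpc bdev_raid_create -z 64 -r raid0 \
        -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
    echo 'expected bdev_raid_create to fail' >&2
fi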
00:18:58.964 06:11:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.224 06:11:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:59.224 "name": "raid_bdev1", 00:18:59.224 "uuid": "51862a31-67c4-4555-b90a-5120644bc109", 00:18:59.224 "strip_size_kb": 64, 00:18:59.224 "state": "configuring", 00:18:59.224 "raid_level": "raid0", 00:18:59.224 "superblock": true, 00:18:59.224 "num_base_bdevs": 4, 00:18:59.224 "num_base_bdevs_discovered": 1, 00:18:59.224 "num_base_bdevs_operational": 4, 00:18:59.224 "base_bdevs_list": [ 00:18:59.224 { 00:18:59.224 "name": "pt1", 00:18:59.224 "uuid": "b8cbfe8a-6973-527b-9d18-16843dd0b9a8", 00:18:59.224 "is_configured": true, 00:18:59.224 "data_offset": 2048, 00:18:59.224 "data_size": 63488 00:18:59.224 }, 00:18:59.224 { 00:18:59.224 "name": null, 00:18:59.224 "uuid": "4c11855b-998c-51af-9536-121d1035a81c", 00:18:59.224 "is_configured": false, 00:18:59.224 "data_offset": 2048, 00:18:59.224 "data_size": 63488 00:18:59.224 }, 00:18:59.224 { 00:18:59.224 "name": null, 00:18:59.224 "uuid": "4bd7b952-0c97-597b-8fe9-f6775d34f2d2", 00:18:59.224 "is_configured": false, 00:18:59.224 "data_offset": 2048, 00:18:59.224 "data_size": 63488 00:18:59.224 }, 00:18:59.224 { 00:18:59.224 "name": null, 00:18:59.224 "uuid": "35d84ff6-9f00-582c-9d66-ed01e0770b4c", 00:18:59.224 "is_configured": false, 00:18:59.224 "data_offset": 2048, 00:18:59.224 "data_size": 63488 00:18:59.224 } 00:18:59.224 ] 00:18:59.224 }' 00:18:59.224 06:11:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:59.224 06:11:29 -- common/autotest_common.sh@10 -- # set +x 00:18:59.483 06:11:30 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:18:59.483 06:11:30 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:59.742 [2024-06-11 06:11:30.233521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:59.742 [2024-06-11 06:11:30.233835] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:59.742 [2024-06-11 06:11:30.233916] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:59.742 [2024-06-11 06:11:30.234011] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:59.742 [2024-06-11 06:11:30.234651] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:59.742 [2024-06-11 06:11:30.234808] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:59.742 [2024-06-11 06:11:30.235031] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:59.742 [2024-06-11 06:11:30.235122] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:59.742 pt2 00:18:59.742 06:11:30 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:00.001 [2024-06-11 06:11:30.465605] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:00.001 06:11:30 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:19:00.001 06:11:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:00.001 06:11:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:00.001 06:11:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:00.001 06:11:30 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:00.001 06:11:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:00.001 06:11:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:00.001 06:11:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:00.001 06:11:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:00.001 06:11:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:00.001 06:11:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.001 06:11:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.260 06:11:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:00.260 "name": "raid_bdev1", 00:19:00.260 "uuid": "51862a31-67c4-4555-b90a-5120644bc109", 00:19:00.260 "strip_size_kb": 64, 00:19:00.260 "state": "configuring", 00:19:00.260 "raid_level": "raid0", 00:19:00.260 "superblock": true, 00:19:00.260 "num_base_bdevs": 4, 00:19:00.260 "num_base_bdevs_discovered": 1, 00:19:00.260 "num_base_bdevs_operational": 4, 00:19:00.260 "base_bdevs_list": [ 00:19:00.260 { 00:19:00.260 "name": "pt1", 00:19:00.260 "uuid": "b8cbfe8a-6973-527b-9d18-16843dd0b9a8", 00:19:00.260 "is_configured": true, 00:19:00.260 "data_offset": 2048, 00:19:00.260 "data_size": 63488 00:19:00.260 }, 00:19:00.260 { 00:19:00.260 "name": null, 00:19:00.260 "uuid": "4c11855b-998c-51af-9536-121d1035a81c", 00:19:00.260 "is_configured": false, 00:19:00.260 "data_offset": 2048, 00:19:00.260 "data_size": 63488 00:19:00.260 }, 00:19:00.260 { 00:19:00.260 "name": null, 00:19:00.260 "uuid": "4bd7b952-0c97-597b-8fe9-f6775d34f2d2", 00:19:00.260 "is_configured": false, 00:19:00.260 "data_offset": 2048, 00:19:00.260 "data_size": 63488 00:19:00.260 }, 00:19:00.260 { 00:19:00.260 "name": null, 00:19:00.260 "uuid": "35d84ff6-9f00-582c-9d66-ed01e0770b4c", 00:19:00.260 "is_configured": false, 00:19:00.260 "data_offset": 2048, 00:19:00.260 "data_size": 63488 00:19:00.260 } 00:19:00.260 ] 00:19:00.260 }' 00:19:00.260 06:11:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:00.260 06:11:30 -- common/autotest_common.sh@10 -- # set +x 00:19:00.827 06:11:31 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:19:00.827 06:11:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:00.828 06:11:31 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:01.086 [2024-06-11 06:11:31.589804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:01.086 [2024-06-11 06:11:31.590051] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.086 [2024-06-11 06:11:31.590129] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:01.087 [2024-06-11 06:11:31.590269] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.087 [2024-06-11 06:11:31.590825] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.087 [2024-06-11 06:11:31.590984] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:01.087 [2024-06-11 06:11:31.591186] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:01.087 [2024-06-11 06:11:31.591285] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:01.087 pt2 00:19:01.087 06:11:31 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:01.087 06:11:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:01.087 06:11:31 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:01.346 [2024-06-11 06:11:31.857849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:01.346 [2024-06-11 06:11:31.858106] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.346 [2024-06-11 06:11:31.858175] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:01.346 [2024-06-11 06:11:31.858271] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.346 [2024-06-11 06:11:31.858888] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.346 [2024-06-11 06:11:31.859055] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:01.346 [2024-06-11 06:11:31.859254] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:01.346 [2024-06-11 06:11:31.859344] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:01.346 pt3 00:19:01.346 06:11:31 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:01.346 06:11:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:01.346 06:11:31 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:01.604 [2024-06-11 06:11:32.105897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:01.604 [2024-06-11 06:11:32.106172] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.605 [2024-06-11 06:11:32.106248] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:01.605 [2024-06-11 06:11:32.106353] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.605 [2024-06-11 06:11:32.106857] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.605 [2024-06-11 06:11:32.107017] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:01.605 [2024-06-11 06:11:32.107215] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:01.605 [2024-06-11 06:11:32.107310] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:01.605 [2024-06-11 06:11:32.107527] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:19:01.605 [2024-06-11 06:11:32.107617] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:01.605 [2024-06-11 06:11:32.107751] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:01.605 [2024-06-11 06:11:32.108136] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:19:01.605 [2024-06-11 06:11:32.108174] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:19:01.605 [2024-06-11 06:11:32.108382] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.605 pt4 00:19:01.605 06:11:32 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:01.605 06:11:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:19:01.605 06:11:32 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:19:01.605 06:11:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:01.605 06:11:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:01.605 06:11:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:19:01.605 06:11:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:01.605 06:11:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:01.605 06:11:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:01.605 06:11:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:01.605 06:11:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:01.605 06:11:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:01.605 06:11:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.605 06:11:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.863 06:11:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:01.863 "name": "raid_bdev1", 00:19:01.863 "uuid": "51862a31-67c4-4555-b90a-5120644bc109", 00:19:01.863 "strip_size_kb": 64, 00:19:01.863 "state": "online", 00:19:01.863 "raid_level": "raid0", 00:19:01.863 "superblock": true, 00:19:01.863 "num_base_bdevs": 4, 00:19:01.863 "num_base_bdevs_discovered": 4, 00:19:01.863 "num_base_bdevs_operational": 4, 00:19:01.863 "base_bdevs_list": [ 00:19:01.863 { 00:19:01.863 "name": "pt1", 00:19:01.863 "uuid": "b8cbfe8a-6973-527b-9d18-16843dd0b9a8", 00:19:01.863 "is_configured": true, 00:19:01.863 "data_offset": 2048, 00:19:01.863 "data_size": 63488 00:19:01.863 }, 00:19:01.863 { 00:19:01.863 "name": "pt2", 00:19:01.863 "uuid": "4c11855b-998c-51af-9536-121d1035a81c", 00:19:01.863 "is_configured": true, 00:19:01.863 "data_offset": 2048, 00:19:01.863 "data_size": 63488 00:19:01.863 }, 00:19:01.863 { 00:19:01.863 "name": "pt3", 00:19:01.863 "uuid": "4bd7b952-0c97-597b-8fe9-f6775d34f2d2", 00:19:01.863 "is_configured": true, 00:19:01.863 "data_offset": 2048, 00:19:01.863 "data_size": 63488 00:19:01.863 }, 00:19:01.863 { 00:19:01.863 "name": "pt4", 00:19:01.863 "uuid": "35d84ff6-9f00-582c-9d66-ed01e0770b4c", 00:19:01.863 "is_configured": true, 00:19:01.863 "data_offset": 2048, 00:19:01.863 "data_size": 63488 00:19:01.863 } 00:19:01.863 ] 00:19:01.863 }' 00:19:01.863 06:11:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:01.863 06:11:32 -- common/autotest_common.sh@10 -- # set +x 00:19:02.431 06:11:32 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:02.431 06:11:32 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:19:02.431 [2024-06-11 06:11:32.970300] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:02.431 06:11:32 -- bdev/bdev_raid.sh@430 -- # '[' 51862a31-67c4-4555-b90a-5120644bc109 '!=' 51862a31-67c4-4555-b90a-5120644bc109 ']' 00:19:02.431 06:11:32 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:19:02.431 06:11:32 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:02.431 06:11:32 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:02.431 06:11:32 -- bdev/bdev_raid.sh@511 -- # killprocess 119973 00:19:02.431 06:11:32 -- common/autotest_common.sh@926 -- # '[' -z 119973 ']' 00:19:02.431 06:11:32 -- common/autotest_common.sh@930 -- # kill -0 119973 00:19:02.431 06:11:32 -- common/autotest_common.sh@931 -- # uname 00:19:02.431 06:11:32 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:02.431 06:11:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119973 00:19:02.431 06:11:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:02.431 06:11:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:02.431 06:11:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119973' 00:19:02.431 killing process with pid 119973 00:19:02.431 06:11:33 -- common/autotest_common.sh@945 -- # kill 119973 00:19:02.431 [2024-06-11 06:11:33.021727] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:02.431 06:11:33 -- common/autotest_common.sh@950 -- # wait 119973 00:19:02.431 [2024-06-11 06:11:33.021924] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:02.431 [2024-06-11 06:11:33.022004] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:02.431 [2024-06-11 06:11:33.022013] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:19:02.999 [2024-06-11 06:11:33.424130] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@513 -- # return 0 00:19:04.377 00:19:04.377 real 0m11.332s 00:19:04.377 user 0m18.424s 00:19:04.377 sys 0m1.987s 00:19:04.377 06:11:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:04.377 06:11:34 -- common/autotest_common.sh@10 -- # set +x 00:19:04.377 ************************************ 00:19:04.377 END TEST raid_superblock_test 00:19:04.377 ************************************ 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:19:04.377 06:11:34 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:04.377 06:11:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:04.377 06:11:34 -- common/autotest_common.sh@10 -- # set +x 00:19:04.377 ************************************ 00:19:04.377 START TEST raid_state_function_test 00:19:04.377 ************************************ 00:19:04.377 06:11:34 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 false 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:04.377 
06:11:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@226 -- # raid_pid=120289 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 120289' 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:04.377 Process raid pid: 120289 00:19:04.377 06:11:34 -- bdev/bdev_raid.sh@228 -- # waitforlisten 120289 /var/tmp/spdk-raid.sock 00:19:04.377 06:11:34 -- common/autotest_common.sh@819 -- # '[' -z 120289 ']' 00:19:04.377 06:11:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:04.377 06:11:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:04.377 06:11:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:04.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:04.377 06:11:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:04.377 06:11:34 -- common/autotest_common.sh@10 -- # set +x 00:19:04.377 [2024-06-11 06:11:34.952282] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
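The prologue traced above — start bdev_svc on the raid socket with bdev_raid debug logging, then block until it answers RPCs — opens every test in this file. Roughly, with a simple poll loop standing in for the harness's waitforlisten helper (the real helper also tracks the pid file and enforces a timeout); rpc_get_methods is used here only as a cheap liveness query:

spdk=/home/vagrant/spdk_repo/spdk
"$spdk/test/app/bdev_svc/bdev_svc" -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!

# Poll the UNIX socket until the app responds; bail out if it died during startup.
until "$spdk/scripts/rpc.py" -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$raid_pid" 2>/dev/null || exit 1
    sleep 0.1
done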
00:19:04.377 [2024-06-11 06:11:34.952776] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.636 [2024-06-11 06:11:35.136035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.895 [2024-06-11 06:11:35.369722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.154 [2024-06-11 06:11:35.616997] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:05.414 06:11:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:05.414 06:11:35 -- common/autotest_common.sh@852 -- # return 0 00:19:05.414 06:11:35 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:05.414 [2024-06-11 06:11:36.030384] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:05.414 [2024-06-11 06:11:36.030665] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:05.414 [2024-06-11 06:11:36.030756] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:05.414 [2024-06-11 06:11:36.030812] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:05.414 [2024-06-11 06:11:36.030839] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:05.414 [2024-06-11 06:11:36.030896] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:05.414 [2024-06-11 06:11:36.030978] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:05.414 [2024-06-11 06:11:36.031029] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:05.414 06:11:36 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:05.414 06:11:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:05.414 06:11:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:05.414 06:11:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:05.414 06:11:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:05.414 06:11:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:05.414 06:11:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:05.414 06:11:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:05.414 06:11:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:05.414 06:11:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:05.414 06:11:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.414 06:11:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.673 06:11:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:05.673 "name": "Existed_Raid", 00:19:05.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.673 "strip_size_kb": 64, 00:19:05.673 "state": "configuring", 00:19:05.673 "raid_level": "concat", 00:19:05.673 "superblock": false, 00:19:05.673 "num_base_bdevs": 4, 00:19:05.673 "num_base_bdevs_discovered": 0, 00:19:05.673 "num_base_bdevs_operational": 4, 00:19:05.673 "base_bdevs_list": [ 00:19:05.673 { 00:19:05.673 
"name": "BaseBdev1", 00:19:05.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.673 "is_configured": false, 00:19:05.673 "data_offset": 0, 00:19:05.673 "data_size": 0 00:19:05.673 }, 00:19:05.673 { 00:19:05.673 "name": "BaseBdev2", 00:19:05.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.673 "is_configured": false, 00:19:05.673 "data_offset": 0, 00:19:05.673 "data_size": 0 00:19:05.673 }, 00:19:05.673 { 00:19:05.673 "name": "BaseBdev3", 00:19:05.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.673 "is_configured": false, 00:19:05.673 "data_offset": 0, 00:19:05.673 "data_size": 0 00:19:05.673 }, 00:19:05.673 { 00:19:05.673 "name": "BaseBdev4", 00:19:05.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.673 "is_configured": false, 00:19:05.673 "data_offset": 0, 00:19:05.673 "data_size": 0 00:19:05.673 } 00:19:05.673 ] 00:19:05.673 }' 00:19:05.673 06:11:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:05.673 06:11:36 -- common/autotest_common.sh@10 -- # set +x 00:19:06.240 06:11:36 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:06.498 [2024-06-11 06:11:36.954460] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:06.498 [2024-06-11 06:11:36.954685] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:19:06.498 06:11:36 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:06.759 [2024-06-11 06:11:37.206548] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:06.759 [2024-06-11 06:11:37.206798] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:06.759 [2024-06-11 06:11:37.206899] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:06.759 [2024-06-11 06:11:37.206956] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:06.759 [2024-06-11 06:11:37.206984] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:06.759 [2024-06-11 06:11:37.207044] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:06.759 [2024-06-11 06:11:37.207123] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:06.759 [2024-06-11 06:11:37.207175] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:06.759 06:11:37 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:07.032 [2024-06-11 06:11:37.424983] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:07.032 BaseBdev1 00:19:07.032 06:11:37 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:07.032 06:11:37 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:07.032 06:11:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:07.032 06:11:37 -- common/autotest_common.sh@889 -- # local i 00:19:07.032 06:11:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:07.032 06:11:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:07.032 06:11:37 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:07.300 06:11:37 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:07.300 [ 00:19:07.300 { 00:19:07.300 "name": "BaseBdev1", 00:19:07.300 "aliases": [ 00:19:07.300 "9b75f7a0-f4e2-417e-a66d-4cd46382f177" 00:19:07.300 ], 00:19:07.300 "product_name": "Malloc disk", 00:19:07.300 "block_size": 512, 00:19:07.300 "num_blocks": 65536, 00:19:07.300 "uuid": "9b75f7a0-f4e2-417e-a66d-4cd46382f177", 00:19:07.301 "assigned_rate_limits": { 00:19:07.301 "rw_ios_per_sec": 0, 00:19:07.301 "rw_mbytes_per_sec": 0, 00:19:07.301 "r_mbytes_per_sec": 0, 00:19:07.301 "w_mbytes_per_sec": 0 00:19:07.301 }, 00:19:07.301 "claimed": true, 00:19:07.301 "claim_type": "exclusive_write", 00:19:07.301 "zoned": false, 00:19:07.301 "supported_io_types": { 00:19:07.301 "read": true, 00:19:07.301 "write": true, 00:19:07.301 "unmap": true, 00:19:07.301 "write_zeroes": true, 00:19:07.301 "flush": true, 00:19:07.301 "reset": true, 00:19:07.301 "compare": false, 00:19:07.301 "compare_and_write": false, 00:19:07.301 "abort": true, 00:19:07.301 "nvme_admin": false, 00:19:07.301 "nvme_io": false 00:19:07.301 }, 00:19:07.301 "memory_domains": [ 00:19:07.301 { 00:19:07.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.301 "dma_device_type": 2 00:19:07.301 } 00:19:07.301 ], 00:19:07.301 "driver_specific": {} 00:19:07.301 } 00:19:07.301 ] 00:19:07.301 06:11:37 -- common/autotest_common.sh@895 -- # return 0 00:19:07.301 06:11:37 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:07.301 06:11:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:07.301 06:11:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:07.301 06:11:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:07.301 06:11:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:07.301 06:11:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:07.301 06:11:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:07.301 06:11:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:07.301 06:11:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:07.301 06:11:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:07.301 06:11:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.301 06:11:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:07.559 06:11:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:07.559 "name": "Existed_Raid", 00:19:07.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.559 "strip_size_kb": 64, 00:19:07.559 "state": "configuring", 00:19:07.559 "raid_level": "concat", 00:19:07.559 "superblock": false, 00:19:07.559 "num_base_bdevs": 4, 00:19:07.559 "num_base_bdevs_discovered": 1, 00:19:07.559 "num_base_bdevs_operational": 4, 00:19:07.559 "base_bdevs_list": [ 00:19:07.559 { 00:19:07.559 "name": "BaseBdev1", 00:19:07.559 "uuid": "9b75f7a0-f4e2-417e-a66d-4cd46382f177", 00:19:07.559 "is_configured": true, 00:19:07.559 "data_offset": 0, 00:19:07.559 "data_size": 65536 00:19:07.559 }, 00:19:07.559 { 00:19:07.559 "name": "BaseBdev2", 00:19:07.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.559 "is_configured": false, 00:19:07.559 "data_offset": 0, 00:19:07.559 "data_size": 0 00:19:07.559 }, 
00:19:07.559 { 00:19:07.559 "name": "BaseBdev3", 00:19:07.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.559 "is_configured": false, 00:19:07.559 "data_offset": 0, 00:19:07.559 "data_size": 0 00:19:07.559 }, 00:19:07.559 { 00:19:07.559 "name": "BaseBdev4", 00:19:07.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.559 "is_configured": false, 00:19:07.559 "data_offset": 0, 00:19:07.559 "data_size": 0 00:19:07.559 } 00:19:07.559 ] 00:19:07.559 }' 00:19:07.559 06:11:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:07.559 06:11:38 -- common/autotest_common.sh@10 -- # set +x 00:19:08.125 06:11:38 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:08.125 [2024-06-11 06:11:38.685291] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:08.125 [2024-06-11 06:11:38.685505] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:08.125 06:11:38 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:19:08.125 06:11:38 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:08.383 [2024-06-11 06:11:38.861409] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:08.383 [2024-06-11 06:11:38.863887] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:08.383 [2024-06-11 06:11:38.864103] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:08.383 [2024-06-11 06:11:38.864206] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:08.383 [2024-06-11 06:11:38.864269] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:08.383 [2024-06-11 06:11:38.864345] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:08.383 [2024-06-11 06:11:38.864392] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:08.383 06:11:38 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:08.383 06:11:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:08.383 06:11:38 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:08.383 06:11:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:08.383 06:11:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:08.383 06:11:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:08.384 06:11:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:08.384 06:11:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:08.384 06:11:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:08.384 06:11:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:08.384 06:11:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:08.384 06:11:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:08.384 06:11:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.384 06:11:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.642 06:11:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:08.642 "name": "Existed_Raid", 00:19:08.642 
"uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.642 "strip_size_kb": 64, 00:19:08.642 "state": "configuring", 00:19:08.642 "raid_level": "concat", 00:19:08.642 "superblock": false, 00:19:08.642 "num_base_bdevs": 4, 00:19:08.642 "num_base_bdevs_discovered": 1, 00:19:08.642 "num_base_bdevs_operational": 4, 00:19:08.642 "base_bdevs_list": [ 00:19:08.642 { 00:19:08.642 "name": "BaseBdev1", 00:19:08.642 "uuid": "9b75f7a0-f4e2-417e-a66d-4cd46382f177", 00:19:08.642 "is_configured": true, 00:19:08.642 "data_offset": 0, 00:19:08.642 "data_size": 65536 00:19:08.642 }, 00:19:08.642 { 00:19:08.642 "name": "BaseBdev2", 00:19:08.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.642 "is_configured": false, 00:19:08.642 "data_offset": 0, 00:19:08.642 "data_size": 0 00:19:08.642 }, 00:19:08.642 { 00:19:08.642 "name": "BaseBdev3", 00:19:08.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.642 "is_configured": false, 00:19:08.642 "data_offset": 0, 00:19:08.642 "data_size": 0 00:19:08.642 }, 00:19:08.642 { 00:19:08.642 "name": "BaseBdev4", 00:19:08.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.642 "is_configured": false, 00:19:08.642 "data_offset": 0, 00:19:08.642 "data_size": 0 00:19:08.642 } 00:19:08.642 ] 00:19:08.642 }' 00:19:08.642 06:11:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:08.642 06:11:39 -- common/autotest_common.sh@10 -- # set +x 00:19:09.209 06:11:39 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:09.209 [2024-06-11 06:11:39.828376] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:09.209 BaseBdev2 00:19:09.209 06:11:39 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:09.209 06:11:39 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:09.209 06:11:39 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:09.209 06:11:39 -- common/autotest_common.sh@889 -- # local i 00:19:09.209 06:11:39 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:09.209 06:11:39 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:09.209 06:11:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:09.467 06:11:40 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:09.726 [ 00:19:09.726 { 00:19:09.726 "name": "BaseBdev2", 00:19:09.726 "aliases": [ 00:19:09.726 "1f30453c-ea0e-4d26-8c0e-474f66c2dea4" 00:19:09.726 ], 00:19:09.726 "product_name": "Malloc disk", 00:19:09.726 "block_size": 512, 00:19:09.726 "num_blocks": 65536, 00:19:09.726 "uuid": "1f30453c-ea0e-4d26-8c0e-474f66c2dea4", 00:19:09.726 "assigned_rate_limits": { 00:19:09.726 "rw_ios_per_sec": 0, 00:19:09.726 "rw_mbytes_per_sec": 0, 00:19:09.726 "r_mbytes_per_sec": 0, 00:19:09.726 "w_mbytes_per_sec": 0 00:19:09.726 }, 00:19:09.726 "claimed": true, 00:19:09.726 "claim_type": "exclusive_write", 00:19:09.726 "zoned": false, 00:19:09.726 "supported_io_types": { 00:19:09.726 "read": true, 00:19:09.726 "write": true, 00:19:09.726 "unmap": true, 00:19:09.726 "write_zeroes": true, 00:19:09.726 "flush": true, 00:19:09.726 "reset": true, 00:19:09.726 "compare": false, 00:19:09.726 "compare_and_write": false, 00:19:09.726 "abort": true, 00:19:09.726 "nvme_admin": false, 00:19:09.726 "nvme_io": false 00:19:09.726 }, 00:19:09.726 "memory_domains": [ 
00:19:09.726 { 00:19:09.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.726 "dma_device_type": 2 00:19:09.726 } 00:19:09.726 ], 00:19:09.726 "driver_specific": {} 00:19:09.726 } 00:19:09.726 ] 00:19:09.726 06:11:40 -- common/autotest_common.sh@895 -- # return 0 00:19:09.726 06:11:40 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:09.726 06:11:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:09.726 06:11:40 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:09.726 06:11:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:09.726 06:11:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:09.726 06:11:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:09.726 06:11:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:09.726 06:11:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:09.726 06:11:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:09.726 06:11:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:09.726 06:11:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:09.726 06:11:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:09.726 06:11:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:09.726 06:11:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.985 06:11:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:09.985 "name": "Existed_Raid", 00:19:09.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.985 "strip_size_kb": 64, 00:19:09.985 "state": "configuring", 00:19:09.985 "raid_level": "concat", 00:19:09.985 "superblock": false, 00:19:09.985 "num_base_bdevs": 4, 00:19:09.985 "num_base_bdevs_discovered": 2, 00:19:09.985 "num_base_bdevs_operational": 4, 00:19:09.985 "base_bdevs_list": [ 00:19:09.985 { 00:19:09.985 "name": "BaseBdev1", 00:19:09.985 "uuid": "9b75f7a0-f4e2-417e-a66d-4cd46382f177", 00:19:09.985 "is_configured": true, 00:19:09.985 "data_offset": 0, 00:19:09.985 "data_size": 65536 00:19:09.985 }, 00:19:09.985 { 00:19:09.985 "name": "BaseBdev2", 00:19:09.985 "uuid": "1f30453c-ea0e-4d26-8c0e-474f66c2dea4", 00:19:09.985 "is_configured": true, 00:19:09.985 "data_offset": 0, 00:19:09.985 "data_size": 65536 00:19:09.985 }, 00:19:09.985 { 00:19:09.985 "name": "BaseBdev3", 00:19:09.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.985 "is_configured": false, 00:19:09.985 "data_offset": 0, 00:19:09.985 "data_size": 0 00:19:09.985 }, 00:19:09.985 { 00:19:09.985 "name": "BaseBdev4", 00:19:09.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.985 "is_configured": false, 00:19:09.985 "data_offset": 0, 00:19:09.985 "data_size": 0 00:19:09.985 } 00:19:09.985 ] 00:19:09.985 }' 00:19:09.985 06:11:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:09.985 06:11:40 -- common/autotest_common.sh@10 -- # set +x 00:19:10.554 06:11:41 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:10.813 [2024-06-11 06:11:41.311455] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:10.813 BaseBdev3 00:19:10.813 06:11:41 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:10.813 06:11:41 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:10.813 06:11:41 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:10.813 
06:11:41 -- common/autotest_common.sh@889 -- # local i 00:19:10.813 06:11:41 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:10.813 06:11:41 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:10.813 06:11:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:11.071 06:11:41 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:11.329 [ 00:19:11.329 { 00:19:11.329 "name": "BaseBdev3", 00:19:11.329 "aliases": [ 00:19:11.329 "8ae4a1b8-241f-4ef3-97af-e26a525ad333" 00:19:11.329 ], 00:19:11.329 "product_name": "Malloc disk", 00:19:11.329 "block_size": 512, 00:19:11.329 "num_blocks": 65536, 00:19:11.329 "uuid": "8ae4a1b8-241f-4ef3-97af-e26a525ad333", 00:19:11.329 "assigned_rate_limits": { 00:19:11.329 "rw_ios_per_sec": 0, 00:19:11.329 "rw_mbytes_per_sec": 0, 00:19:11.329 "r_mbytes_per_sec": 0, 00:19:11.329 "w_mbytes_per_sec": 0 00:19:11.329 }, 00:19:11.329 "claimed": true, 00:19:11.329 "claim_type": "exclusive_write", 00:19:11.329 "zoned": false, 00:19:11.329 "supported_io_types": { 00:19:11.329 "read": true, 00:19:11.329 "write": true, 00:19:11.329 "unmap": true, 00:19:11.329 "write_zeroes": true, 00:19:11.329 "flush": true, 00:19:11.329 "reset": true, 00:19:11.329 "compare": false, 00:19:11.329 "compare_and_write": false, 00:19:11.329 "abort": true, 00:19:11.329 "nvme_admin": false, 00:19:11.329 "nvme_io": false 00:19:11.329 }, 00:19:11.329 "memory_domains": [ 00:19:11.329 { 00:19:11.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.329 "dma_device_type": 2 00:19:11.329 } 00:19:11.329 ], 00:19:11.329 "driver_specific": {} 00:19:11.329 } 00:19:11.329 ] 00:19:11.329 06:11:41 -- common/autotest_common.sh@895 -- # return 0 00:19:11.329 06:11:41 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:11.329 06:11:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:11.329 06:11:41 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:11.329 06:11:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:11.329 06:11:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:11.329 06:11:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:11.329 06:11:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:11.329 06:11:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:11.329 06:11:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:11.329 06:11:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:11.329 06:11:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:11.329 06:11:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:11.329 06:11:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.329 06:11:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:11.587 06:11:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:11.587 "name": "Existed_Raid", 00:19:11.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.587 "strip_size_kb": 64, 00:19:11.587 "state": "configuring", 00:19:11.587 "raid_level": "concat", 00:19:11.587 "superblock": false, 00:19:11.587 "num_base_bdevs": 4, 00:19:11.587 "num_base_bdevs_discovered": 3, 00:19:11.587 "num_base_bdevs_operational": 4, 00:19:11.587 "base_bdevs_list": [ 00:19:11.587 { 00:19:11.587 "name": 
"BaseBdev1", 00:19:11.587 "uuid": "9b75f7a0-f4e2-417e-a66d-4cd46382f177", 00:19:11.587 "is_configured": true, 00:19:11.587 "data_offset": 0, 00:19:11.587 "data_size": 65536 00:19:11.587 }, 00:19:11.587 { 00:19:11.587 "name": "BaseBdev2", 00:19:11.587 "uuid": "1f30453c-ea0e-4d26-8c0e-474f66c2dea4", 00:19:11.587 "is_configured": true, 00:19:11.587 "data_offset": 0, 00:19:11.587 "data_size": 65536 00:19:11.587 }, 00:19:11.587 { 00:19:11.587 "name": "BaseBdev3", 00:19:11.587 "uuid": "8ae4a1b8-241f-4ef3-97af-e26a525ad333", 00:19:11.587 "is_configured": true, 00:19:11.587 "data_offset": 0, 00:19:11.587 "data_size": 65536 00:19:11.587 }, 00:19:11.587 { 00:19:11.587 "name": "BaseBdev4", 00:19:11.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.587 "is_configured": false, 00:19:11.587 "data_offset": 0, 00:19:11.587 "data_size": 0 00:19:11.587 } 00:19:11.587 ] 00:19:11.587 }' 00:19:11.587 06:11:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:11.587 06:11:42 -- common/autotest_common.sh@10 -- # set +x 00:19:12.154 06:11:42 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:12.412 [2024-06-11 06:11:42.892866] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:12.412 [2024-06-11 06:11:42.893159] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:19:12.412 [2024-06-11 06:11:42.893201] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:19:12.412 [2024-06-11 06:11:42.893457] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:19:12.412 [2024-06-11 06:11:42.893952] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:19:12.412 [2024-06-11 06:11:42.894066] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:19:12.412 [2024-06-11 06:11:42.894412] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:12.412 BaseBdev4 00:19:12.412 06:11:42 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:12.412 06:11:42 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:19:12.412 06:11:42 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:12.412 06:11:42 -- common/autotest_common.sh@889 -- # local i 00:19:12.412 06:11:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:12.412 06:11:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:12.412 06:11:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:12.670 06:11:43 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:12.928 [ 00:19:12.928 { 00:19:12.928 "name": "BaseBdev4", 00:19:12.928 "aliases": [ 00:19:12.928 "609ffdc2-0904-4083-89dd-19cbe305afcc" 00:19:12.928 ], 00:19:12.928 "product_name": "Malloc disk", 00:19:12.928 "block_size": 512, 00:19:12.928 "num_blocks": 65536, 00:19:12.928 "uuid": "609ffdc2-0904-4083-89dd-19cbe305afcc", 00:19:12.928 "assigned_rate_limits": { 00:19:12.928 "rw_ios_per_sec": 0, 00:19:12.928 "rw_mbytes_per_sec": 0, 00:19:12.928 "r_mbytes_per_sec": 0, 00:19:12.928 "w_mbytes_per_sec": 0 00:19:12.928 }, 00:19:12.928 "claimed": true, 00:19:12.928 "claim_type": "exclusive_write", 00:19:12.928 "zoned": false, 00:19:12.928 
"supported_io_types": { 00:19:12.928 "read": true, 00:19:12.928 "write": true, 00:19:12.928 "unmap": true, 00:19:12.928 "write_zeroes": true, 00:19:12.928 "flush": true, 00:19:12.928 "reset": true, 00:19:12.928 "compare": false, 00:19:12.928 "compare_and_write": false, 00:19:12.928 "abort": true, 00:19:12.928 "nvme_admin": false, 00:19:12.928 "nvme_io": false 00:19:12.928 }, 00:19:12.928 "memory_domains": [ 00:19:12.928 { 00:19:12.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.928 "dma_device_type": 2 00:19:12.928 } 00:19:12.928 ], 00:19:12.928 "driver_specific": {} 00:19:12.928 } 00:19:12.928 ] 00:19:12.928 06:11:43 -- common/autotest_common.sh@895 -- # return 0 00:19:12.928 06:11:43 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:12.928 06:11:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:12.928 06:11:43 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:12.928 06:11:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:12.928 06:11:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:12.928 06:11:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:12.928 06:11:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:12.928 06:11:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:12.928 06:11:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:12.928 06:11:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:12.928 06:11:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:12.928 06:11:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:12.928 06:11:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.928 06:11:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.186 06:11:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:13.186 "name": "Existed_Raid", 00:19:13.186 "uuid": "bf146ca0-969a-4111-a40e-92368cfeaba4", 00:19:13.186 "strip_size_kb": 64, 00:19:13.186 "state": "online", 00:19:13.186 "raid_level": "concat", 00:19:13.186 "superblock": false, 00:19:13.186 "num_base_bdevs": 4, 00:19:13.186 "num_base_bdevs_discovered": 4, 00:19:13.186 "num_base_bdevs_operational": 4, 00:19:13.186 "base_bdevs_list": [ 00:19:13.186 { 00:19:13.186 "name": "BaseBdev1", 00:19:13.186 "uuid": "9b75f7a0-f4e2-417e-a66d-4cd46382f177", 00:19:13.186 "is_configured": true, 00:19:13.186 "data_offset": 0, 00:19:13.186 "data_size": 65536 00:19:13.186 }, 00:19:13.186 { 00:19:13.186 "name": "BaseBdev2", 00:19:13.186 "uuid": "1f30453c-ea0e-4d26-8c0e-474f66c2dea4", 00:19:13.186 "is_configured": true, 00:19:13.186 "data_offset": 0, 00:19:13.186 "data_size": 65536 00:19:13.186 }, 00:19:13.186 { 00:19:13.186 "name": "BaseBdev3", 00:19:13.186 "uuid": "8ae4a1b8-241f-4ef3-97af-e26a525ad333", 00:19:13.186 "is_configured": true, 00:19:13.186 "data_offset": 0, 00:19:13.186 "data_size": 65536 00:19:13.186 }, 00:19:13.186 { 00:19:13.186 "name": "BaseBdev4", 00:19:13.186 "uuid": "609ffdc2-0904-4083-89dd-19cbe305afcc", 00:19:13.186 "is_configured": true, 00:19:13.186 "data_offset": 0, 00:19:13.186 "data_size": 65536 00:19:13.186 } 00:19:13.186 ] 00:19:13.186 }' 00:19:13.186 06:11:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:13.186 06:11:43 -- common/autotest_common.sh@10 -- # set +x 00:19:13.753 06:11:44 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:19:13.753 [2024-06-11 06:11:44.357291] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:13.753 [2024-06-11 06:11:44.357506] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:13.753 [2024-06-11 06:11:44.357703] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:14.011 06:11:44 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:14.011 06:11:44 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:19:14.011 06:11:44 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:14.011 06:11:44 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:14.011 06:11:44 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:19:14.011 06:11:44 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:19:14.011 06:11:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:14.011 06:11:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:14.011 06:11:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:14.012 06:11:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:14.012 06:11:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:14.012 06:11:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:14.012 06:11:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:14.012 06:11:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:14.012 06:11:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:14.012 06:11:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.012 06:11:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.270 06:11:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:14.270 "name": "Existed_Raid", 00:19:14.270 "uuid": "bf146ca0-969a-4111-a40e-92368cfeaba4", 00:19:14.270 "strip_size_kb": 64, 00:19:14.270 "state": "offline", 00:19:14.270 "raid_level": "concat", 00:19:14.270 "superblock": false, 00:19:14.270 "num_base_bdevs": 4, 00:19:14.270 "num_base_bdevs_discovered": 3, 00:19:14.270 "num_base_bdevs_operational": 3, 00:19:14.270 "base_bdevs_list": [ 00:19:14.270 { 00:19:14.270 "name": null, 00:19:14.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:14.270 "is_configured": false, 00:19:14.270 "data_offset": 0, 00:19:14.270 "data_size": 65536 00:19:14.270 }, 00:19:14.270 { 00:19:14.270 "name": "BaseBdev2", 00:19:14.270 "uuid": "1f30453c-ea0e-4d26-8c0e-474f66c2dea4", 00:19:14.270 "is_configured": true, 00:19:14.270 "data_offset": 0, 00:19:14.270 "data_size": 65536 00:19:14.270 }, 00:19:14.270 { 00:19:14.270 "name": "BaseBdev3", 00:19:14.270 "uuid": "8ae4a1b8-241f-4ef3-97af-e26a525ad333", 00:19:14.270 "is_configured": true, 00:19:14.270 "data_offset": 0, 00:19:14.270 "data_size": 65536 00:19:14.270 }, 00:19:14.270 { 00:19:14.270 "name": "BaseBdev4", 00:19:14.270 "uuid": "609ffdc2-0904-4083-89dd-19cbe305afcc", 00:19:14.270 "is_configured": true, 00:19:14.270 "data_offset": 0, 00:19:14.270 "data_size": 65536 00:19:14.270 } 00:19:14.270 ] 00:19:14.270 }' 00:19:14.270 06:11:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:14.270 06:11:44 -- common/autotest_common.sh@10 -- # set +x 00:19:14.838 06:11:45 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:14.838 06:11:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:14.838 06:11:45 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:19:14.838 06:11:45 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:15.097 06:11:45 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:15.097 06:11:45 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:15.097 06:11:45 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:15.356 [2024-06-11 06:11:45.802015] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:15.356 06:11:45 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:15.356 06:11:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:15.356 06:11:45 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.356 06:11:45 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:15.615 06:11:46 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:15.615 06:11:46 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:15.615 06:11:46 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:15.874 [2024-06-11 06:11:46.330643] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:15.874 06:11:46 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:15.874 06:11:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:15.874 06:11:46 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.874 06:11:46 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:16.133 06:11:46 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:16.133 06:11:46 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:16.133 06:11:46 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:16.392 [2024-06-11 06:11:46.788291] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:16.392 [2024-06-11 06:11:46.788535] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:19:16.392 06:11:46 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:16.392 06:11:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:16.392 06:11:46 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.392 06:11:46 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:16.650 06:11:47 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:16.650 06:11:47 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:16.650 06:11:47 -- bdev/bdev_raid.sh@287 -- # killprocess 120289 00:19:16.650 06:11:47 -- common/autotest_common.sh@926 -- # '[' -z 120289 ']' 00:19:16.650 06:11:47 -- common/autotest_common.sh@930 -- # kill -0 120289 00:19:16.650 06:11:47 -- common/autotest_common.sh@931 -- # uname 00:19:16.650 06:11:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:16.650 06:11:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 120289 00:19:16.650 06:11:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:16.650 06:11:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:16.650 06:11:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 120289' 00:19:16.650 killing process with pid 120289 00:19:16.650 06:11:47 -- common/autotest_common.sh@945 
-- # kill 120289 00:19:16.650 [2024-06-11 06:11:47.110514] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:16.650 06:11:47 -- common/autotest_common.sh@950 -- # wait 120289 00:19:16.650 [2024-06-11 06:11:47.110765] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:18.027 ************************************ 00:19:18.027 END TEST raid_state_function_test 00:19:18.027 ************************************ 00:19:18.027 06:11:48 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:18.027 00:19:18.027 real 0m13.632s 00:19:18.027 user 0m22.813s 00:19:18.027 sys 0m2.405s 00:19:18.027 06:11:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:18.027 06:11:48 -- common/autotest_common.sh@10 -- # set +x 00:19:18.027 06:11:48 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:19:18.027 06:11:48 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:18.027 06:11:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:18.027 06:11:48 -- common/autotest_common.sh@10 -- # set +x 00:19:18.027 ************************************ 00:19:18.027 START TEST raid_state_function_test_sb 00:19:18.027 ************************************ 00:19:18.027 06:11:48 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 true 00:19:18.027 06:11:48 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:19:18.027 06:11:48 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:18.027 06:11:48 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:19:18.027 06:11:48 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:18.027 06:11:48 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:18.027 06:11:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:18.027 06:11:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:18.027 06:11:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:18.027 06:11:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:18.027 06:11:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:18.028 06:11:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:18.028 06:11:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:18.028 06:11:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:18.028 06:11:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:18.028 06:11:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:18.028 06:11:48 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:18.028 06:11:48 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:18.028 06:11:48 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:18.028 06:11:48 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:18.028 06:11:48 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:18.028 06:11:48 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:18.028 06:11:48 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:18.028 06:11:48 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:18.028 06:11:48 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:18.028 06:11:48 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:19:18.028 06:11:48 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:19:18.028 06:11:48 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:19:18.028 06:11:48 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:19:18.028 06:11:48 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:19:18.028 06:11:48 -- bdev/bdev_raid.sh@226 -- # raid_pid=120720 
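A minimal sketch of the RPC sequence this superblock test exercises, reproduced by hand against the same socket (rpc.py path, bdev names, and flags are all taken from the surrounding trace; the one-shot loop collapses the test's incremental per-bdev verification and is an illustration, not the harness code):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # four 32 MiB malloc bdevs with 512-byte blocks, as the test creates them
  for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    $RPC bdev_malloc_create 32 512 -b $b
  done
  # -r concat, -z 64 (strip size in KiB), -s writes a superblock to each base bdev
  $RPC bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # same query and jq filter that verify_raid_bdev_state uses above
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # with all four base bdevs present, expect "online"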
00:19:18.028 06:11:48 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:18.028 Process raid pid: 120720 00:19:18.028 06:11:48 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 120720' 00:19:18.028 06:11:48 -- bdev/bdev_raid.sh@228 -- # waitforlisten 120720 /var/tmp/spdk-raid.sock 00:19:18.028 06:11:48 -- common/autotest_common.sh@819 -- # '[' -z 120720 ']' 00:19:18.028 06:11:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:18.028 06:11:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:18.028 06:11:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:18.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:18.028 06:11:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:18.028 06:11:48 -- common/autotest_common.sh@10 -- # set +x 00:19:18.028 [2024-06-11 06:11:48.635685] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:18.028 [2024-06-11 06:11:48.636038] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.287 [2024-06-11 06:11:48.800320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.546 [2024-06-11 06:11:49.034895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.805 [2024-06-11 06:11:49.281742] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:19.067 06:11:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:19.067 06:11:49 -- common/autotest_common.sh@852 -- # return 0 00:19:19.067 06:11:49 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:19.326 [2024-06-11 06:11:49.759308] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:19.326 [2024-06-11 06:11:49.759572] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:19.326 [2024-06-11 06:11:49.759672] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:19.326 [2024-06-11 06:11:49.759730] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:19.326 [2024-06-11 06:11:49.759756] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:19.326 [2024-06-11 06:11:49.759857] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:19.326 [2024-06-11 06:11:49.759889] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:19.326 [2024-06-11 06:11:49.759932] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:19.326 06:11:49 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:19.326 06:11:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:19.326 06:11:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:19.326 06:11:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:19.326 06:11:49 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:19.326 06:11:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:19.326 06:11:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:19.326 06:11:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:19.326 06:11:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:19.326 06:11:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:19.326 06:11:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.326 06:11:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.326 06:11:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:19.326 "name": "Existed_Raid", 00:19:19.326 "uuid": "9e97bbaf-cf28-47f3-be6b-255d0c26e2ef", 00:19:19.326 "strip_size_kb": 64, 00:19:19.326 "state": "configuring", 00:19:19.326 "raid_level": "concat", 00:19:19.326 "superblock": true, 00:19:19.326 "num_base_bdevs": 4, 00:19:19.326 "num_base_bdevs_discovered": 0, 00:19:19.326 "num_base_bdevs_operational": 4, 00:19:19.326 "base_bdevs_list": [ 00:19:19.326 { 00:19:19.326 "name": "BaseBdev1", 00:19:19.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.326 "is_configured": false, 00:19:19.326 "data_offset": 0, 00:19:19.326 "data_size": 0 00:19:19.326 }, 00:19:19.326 { 00:19:19.326 "name": "BaseBdev2", 00:19:19.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.326 "is_configured": false, 00:19:19.326 "data_offset": 0, 00:19:19.326 "data_size": 0 00:19:19.326 }, 00:19:19.326 { 00:19:19.326 "name": "BaseBdev3", 00:19:19.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.326 "is_configured": false, 00:19:19.326 "data_offset": 0, 00:19:19.326 "data_size": 0 00:19:19.326 }, 00:19:19.326 { 00:19:19.326 "name": "BaseBdev4", 00:19:19.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.326 "is_configured": false, 00:19:19.326 "data_offset": 0, 00:19:19.326 "data_size": 0 00:19:19.326 } 00:19:19.326 ] 00:19:19.326 }' 00:19:19.326 06:11:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:19.326 06:11:49 -- common/autotest_common.sh@10 -- # set +x 00:19:20.263 06:11:50 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:20.263 [2024-06-11 06:11:50.707352] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:20.263 [2024-06-11 06:11:50.707579] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:19:20.263 06:11:50 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:20.522 [2024-06-11 06:11:50.963500] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:20.522 [2024-06-11 06:11:50.963719] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:20.522 [2024-06-11 06:11:50.963818] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:20.522 [2024-06-11 06:11:50.963876] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:20.522 [2024-06-11 06:11:50.963902] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:20.522 [2024-06-11 06:11:50.963961] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:20.522 [2024-06-11 06:11:50.964037] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:20.522 [2024-06-11 06:11:50.964088] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:20.522 06:11:50 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:20.781 [2024-06-11 06:11:51.180048] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:20.781 BaseBdev1 00:19:20.781 06:11:51 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:20.781 06:11:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:20.781 06:11:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:20.781 06:11:51 -- common/autotest_common.sh@889 -- # local i 00:19:20.781 06:11:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:20.781 06:11:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:20.781 06:11:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:20.781 06:11:51 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:21.041 [ 00:19:21.041 { 00:19:21.041 "name": "BaseBdev1", 00:19:21.041 "aliases": [ 00:19:21.041 "320e3a27-537f-43b0-a4b1-79b06718a382" 00:19:21.041 ], 00:19:21.041 "product_name": "Malloc disk", 00:19:21.041 "block_size": 512, 00:19:21.041 "num_blocks": 65536, 00:19:21.041 "uuid": "320e3a27-537f-43b0-a4b1-79b06718a382", 00:19:21.041 "assigned_rate_limits": { 00:19:21.041 "rw_ios_per_sec": 0, 00:19:21.041 "rw_mbytes_per_sec": 0, 00:19:21.041 "r_mbytes_per_sec": 0, 00:19:21.041 "w_mbytes_per_sec": 0 00:19:21.041 }, 00:19:21.041 "claimed": true, 00:19:21.041 "claim_type": "exclusive_write", 00:19:21.041 "zoned": false, 00:19:21.041 "supported_io_types": { 00:19:21.041 "read": true, 00:19:21.041 "write": true, 00:19:21.041 "unmap": true, 00:19:21.041 "write_zeroes": true, 00:19:21.041 "flush": true, 00:19:21.041 "reset": true, 00:19:21.041 "compare": false, 00:19:21.041 "compare_and_write": false, 00:19:21.041 "abort": true, 00:19:21.041 "nvme_admin": false, 00:19:21.041 "nvme_io": false 00:19:21.041 }, 00:19:21.041 "memory_domains": [ 00:19:21.041 { 00:19:21.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.041 "dma_device_type": 2 00:19:21.041 } 00:19:21.041 ], 00:19:21.041 "driver_specific": {} 00:19:21.041 } 00:19:21.041 ] 00:19:21.041 06:11:51 -- common/autotest_common.sh@895 -- # return 0 00:19:21.041 06:11:51 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:21.041 06:11:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:21.041 06:11:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:21.041 06:11:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:21.041 06:11:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:21.041 06:11:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:21.041 06:11:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:21.041 06:11:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:21.041 06:11:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:21.041 06:11:51 -- bdev/bdev_raid.sh@125 -- # local 
tmp 00:19:21.041 06:11:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.041 06:11:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.300 06:11:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:21.300 "name": "Existed_Raid", 00:19:21.300 "uuid": "63087b66-44a0-4afe-94f4-208934cb6fd9", 00:19:21.300 "strip_size_kb": 64, 00:19:21.300 "state": "configuring", 00:19:21.300 "raid_level": "concat", 00:19:21.300 "superblock": true, 00:19:21.300 "num_base_bdevs": 4, 00:19:21.300 "num_base_bdevs_discovered": 1, 00:19:21.300 "num_base_bdevs_operational": 4, 00:19:21.300 "base_bdevs_list": [ 00:19:21.300 { 00:19:21.300 "name": "BaseBdev1", 00:19:21.300 "uuid": "320e3a27-537f-43b0-a4b1-79b06718a382", 00:19:21.300 "is_configured": true, 00:19:21.300 "data_offset": 2048, 00:19:21.300 "data_size": 63488 00:19:21.300 }, 00:19:21.300 { 00:19:21.300 "name": "BaseBdev2", 00:19:21.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.300 "is_configured": false, 00:19:21.300 "data_offset": 0, 00:19:21.300 "data_size": 0 00:19:21.300 }, 00:19:21.300 { 00:19:21.300 "name": "BaseBdev3", 00:19:21.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.300 "is_configured": false, 00:19:21.300 "data_offset": 0, 00:19:21.300 "data_size": 0 00:19:21.300 }, 00:19:21.300 { 00:19:21.300 "name": "BaseBdev4", 00:19:21.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.300 "is_configured": false, 00:19:21.300 "data_offset": 0, 00:19:21.300 "data_size": 0 00:19:21.300 } 00:19:21.300 ] 00:19:21.300 }' 00:19:21.300 06:11:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:21.300 06:11:51 -- common/autotest_common.sh@10 -- # set +x 00:19:21.868 06:11:52 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:21.868 [2024-06-11 06:11:52.420299] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:21.868 [2024-06-11 06:11:52.420566] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:21.868 06:11:52 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:19:21.868 06:11:52 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:22.436 06:11:52 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:22.695 BaseBdev1 00:19:22.695 06:11:53 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:19:22.695 06:11:53 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:22.695 06:11:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:22.695 06:11:53 -- common/autotest_common.sh@889 -- # local i 00:19:22.695 06:11:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:22.695 06:11:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:22.695 06:11:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:22.695 06:11:53 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:22.988 [ 00:19:22.988 { 00:19:22.988 "name": "BaseBdev1", 00:19:22.988 "aliases": [ 00:19:22.988 "793f80be-0d29-4b47-b65d-0048307d5a3b" 00:19:22.988 ], 
00:19:22.988 "product_name": "Malloc disk", 00:19:22.988 "block_size": 512, 00:19:22.988 "num_blocks": 65536, 00:19:22.988 "uuid": "793f80be-0d29-4b47-b65d-0048307d5a3b", 00:19:22.988 "assigned_rate_limits": { 00:19:22.988 "rw_ios_per_sec": 0, 00:19:22.988 "rw_mbytes_per_sec": 0, 00:19:22.988 "r_mbytes_per_sec": 0, 00:19:22.988 "w_mbytes_per_sec": 0 00:19:22.988 }, 00:19:22.988 "claimed": false, 00:19:22.988 "zoned": false, 00:19:22.988 "supported_io_types": { 00:19:22.988 "read": true, 00:19:22.988 "write": true, 00:19:22.988 "unmap": true, 00:19:22.988 "write_zeroes": true, 00:19:22.988 "flush": true, 00:19:22.988 "reset": true, 00:19:22.988 "compare": false, 00:19:22.988 "compare_and_write": false, 00:19:22.988 "abort": true, 00:19:22.988 "nvme_admin": false, 00:19:22.988 "nvme_io": false 00:19:22.988 }, 00:19:22.988 "memory_domains": [ 00:19:22.988 { 00:19:22.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.988 "dma_device_type": 2 00:19:22.988 } 00:19:22.988 ], 00:19:22.988 "driver_specific": {} 00:19:22.988 } 00:19:22.988 ] 00:19:22.988 06:11:53 -- common/autotest_common.sh@895 -- # return 0 00:19:22.988 06:11:53 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:23.247 [2024-06-11 06:11:53.669811] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:23.247 [2024-06-11 06:11:53.672247] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:23.247 [2024-06-11 06:11:53.672457] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:23.247 [2024-06-11 06:11:53.672573] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:23.247 [2024-06-11 06:11:53.672635] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:23.247 [2024-06-11 06:11:53.672662] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:23.247 [2024-06-11 06:11:53.672700] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:23.247 06:11:53 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:23.247 06:11:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:23.247 06:11:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:23.247 06:11:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:23.247 06:11:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:23.247 06:11:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:23.247 06:11:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:23.247 06:11:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:23.247 06:11:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:23.247 06:11:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:23.247 06:11:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:23.247 06:11:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:23.247 06:11:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.247 06:11:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:23.247 06:11:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:23.247 "name": "Existed_Raid", 
00:19:23.247 "uuid": "0828c919-3d30-4ec2-94c8-89140435163a", 00:19:23.247 "strip_size_kb": 64, 00:19:23.247 "state": "configuring", 00:19:23.247 "raid_level": "concat", 00:19:23.247 "superblock": true, 00:19:23.247 "num_base_bdevs": 4, 00:19:23.247 "num_base_bdevs_discovered": 1, 00:19:23.247 "num_base_bdevs_operational": 4, 00:19:23.247 "base_bdevs_list": [ 00:19:23.247 { 00:19:23.247 "name": "BaseBdev1", 00:19:23.247 "uuid": "793f80be-0d29-4b47-b65d-0048307d5a3b", 00:19:23.247 "is_configured": true, 00:19:23.247 "data_offset": 2048, 00:19:23.247 "data_size": 63488 00:19:23.247 }, 00:19:23.247 { 00:19:23.247 "name": "BaseBdev2", 00:19:23.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.247 "is_configured": false, 00:19:23.247 "data_offset": 0, 00:19:23.247 "data_size": 0 00:19:23.247 }, 00:19:23.247 { 00:19:23.247 "name": "BaseBdev3", 00:19:23.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.247 "is_configured": false, 00:19:23.247 "data_offset": 0, 00:19:23.247 "data_size": 0 00:19:23.247 }, 00:19:23.247 { 00:19:23.247 "name": "BaseBdev4", 00:19:23.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.247 "is_configured": false, 00:19:23.247 "data_offset": 0, 00:19:23.247 "data_size": 0 00:19:23.247 } 00:19:23.247 ] 00:19:23.247 }' 00:19:23.247 06:11:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:23.247 06:11:53 -- common/autotest_common.sh@10 -- # set +x 00:19:23.815 06:11:54 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:24.074 [2024-06-11 06:11:54.709963] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:24.074 BaseBdev2 00:19:24.333 06:11:54 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:24.333 06:11:54 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:24.333 06:11:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:24.333 06:11:54 -- common/autotest_common.sh@889 -- # local i 00:19:24.333 06:11:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:24.333 06:11:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:24.333 06:11:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:24.333 06:11:54 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:24.592 [ 00:19:24.592 { 00:19:24.592 "name": "BaseBdev2", 00:19:24.592 "aliases": [ 00:19:24.592 "6d2af4b6-1d39-481b-b816-de6bf6967776" 00:19:24.592 ], 00:19:24.592 "product_name": "Malloc disk", 00:19:24.592 "block_size": 512, 00:19:24.592 "num_blocks": 65536, 00:19:24.592 "uuid": "6d2af4b6-1d39-481b-b816-de6bf6967776", 00:19:24.592 "assigned_rate_limits": { 00:19:24.592 "rw_ios_per_sec": 0, 00:19:24.592 "rw_mbytes_per_sec": 0, 00:19:24.592 "r_mbytes_per_sec": 0, 00:19:24.592 "w_mbytes_per_sec": 0 00:19:24.592 }, 00:19:24.592 "claimed": true, 00:19:24.592 "claim_type": "exclusive_write", 00:19:24.592 "zoned": false, 00:19:24.592 "supported_io_types": { 00:19:24.592 "read": true, 00:19:24.592 "write": true, 00:19:24.592 "unmap": true, 00:19:24.592 "write_zeroes": true, 00:19:24.592 "flush": true, 00:19:24.592 "reset": true, 00:19:24.592 "compare": false, 00:19:24.592 "compare_and_write": false, 00:19:24.592 "abort": true, 00:19:24.592 "nvme_admin": false, 00:19:24.592 "nvme_io": false 00:19:24.592 }, 00:19:24.593 
"memory_domains": [ 00:19:24.593 { 00:19:24.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.593 "dma_device_type": 2 00:19:24.593 } 00:19:24.593 ], 00:19:24.593 "driver_specific": {} 00:19:24.593 } 00:19:24.593 ] 00:19:24.593 06:11:55 -- common/autotest_common.sh@895 -- # return 0 00:19:24.593 06:11:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:24.593 06:11:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:24.593 06:11:55 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:24.593 06:11:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:24.593 06:11:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:24.593 06:11:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:24.593 06:11:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:24.593 06:11:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:24.593 06:11:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:24.593 06:11:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:24.593 06:11:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:24.593 06:11:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:24.593 06:11:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.593 06:11:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:24.852 06:11:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:24.852 "name": "Existed_Raid", 00:19:24.852 "uuid": "0828c919-3d30-4ec2-94c8-89140435163a", 00:19:24.852 "strip_size_kb": 64, 00:19:24.852 "state": "configuring", 00:19:24.852 "raid_level": "concat", 00:19:24.852 "superblock": true, 00:19:24.852 "num_base_bdevs": 4, 00:19:24.852 "num_base_bdevs_discovered": 2, 00:19:24.852 "num_base_bdevs_operational": 4, 00:19:24.852 "base_bdevs_list": [ 00:19:24.852 { 00:19:24.852 "name": "BaseBdev1", 00:19:24.852 "uuid": "793f80be-0d29-4b47-b65d-0048307d5a3b", 00:19:24.852 "is_configured": true, 00:19:24.852 "data_offset": 2048, 00:19:24.852 "data_size": 63488 00:19:24.852 }, 00:19:24.852 { 00:19:24.852 "name": "BaseBdev2", 00:19:24.852 "uuid": "6d2af4b6-1d39-481b-b816-de6bf6967776", 00:19:24.852 "is_configured": true, 00:19:24.852 "data_offset": 2048, 00:19:24.852 "data_size": 63488 00:19:24.852 }, 00:19:24.852 { 00:19:24.852 "name": "BaseBdev3", 00:19:24.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.852 "is_configured": false, 00:19:24.852 "data_offset": 0, 00:19:24.852 "data_size": 0 00:19:24.852 }, 00:19:24.852 { 00:19:24.852 "name": "BaseBdev4", 00:19:24.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.852 "is_configured": false, 00:19:24.852 "data_offset": 0, 00:19:24.852 "data_size": 0 00:19:24.852 } 00:19:24.852 ] 00:19:24.852 }' 00:19:24.852 06:11:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:24.852 06:11:55 -- common/autotest_common.sh@10 -- # set +x 00:19:25.421 06:11:55 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:25.421 [2024-06-11 06:11:56.062681] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:25.421 BaseBdev3 00:19:25.680 06:11:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:25.680 06:11:56 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:25.680 06:11:56 -- common/autotest_common.sh@888 -- # local 
bdev_timeout= 00:19:25.680 06:11:56 -- common/autotest_common.sh@889 -- # local i 00:19:25.680 06:11:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:25.680 06:11:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:25.680 06:11:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:25.939 06:11:56 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:25.939 [ 00:19:25.939 { 00:19:25.939 "name": "BaseBdev3", 00:19:25.939 "aliases": [ 00:19:25.939 "0d81ae41-4b01-4535-afc7-489185b61267" 00:19:25.939 ], 00:19:25.939 "product_name": "Malloc disk", 00:19:25.939 "block_size": 512, 00:19:25.939 "num_blocks": 65536, 00:19:25.939 "uuid": "0d81ae41-4b01-4535-afc7-489185b61267", 00:19:25.939 "assigned_rate_limits": { 00:19:25.939 "rw_ios_per_sec": 0, 00:19:25.939 "rw_mbytes_per_sec": 0, 00:19:25.939 "r_mbytes_per_sec": 0, 00:19:25.939 "w_mbytes_per_sec": 0 00:19:25.939 }, 00:19:25.939 "claimed": true, 00:19:25.939 "claim_type": "exclusive_write", 00:19:25.939 "zoned": false, 00:19:25.939 "supported_io_types": { 00:19:25.939 "read": true, 00:19:25.939 "write": true, 00:19:25.939 "unmap": true, 00:19:25.939 "write_zeroes": true, 00:19:25.939 "flush": true, 00:19:25.939 "reset": true, 00:19:25.939 "compare": false, 00:19:25.939 "compare_and_write": false, 00:19:25.939 "abort": true, 00:19:25.939 "nvme_admin": false, 00:19:25.939 "nvme_io": false 00:19:25.939 }, 00:19:25.939 "memory_domains": [ 00:19:25.939 { 00:19:25.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.939 "dma_device_type": 2 00:19:25.939 } 00:19:25.939 ], 00:19:25.939 "driver_specific": {} 00:19:25.939 } 00:19:25.939 ] 00:19:26.199 06:11:56 -- common/autotest_common.sh@895 -- # return 0 00:19:26.199 06:11:56 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:26.199 06:11:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:26.199 06:11:56 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:26.199 06:11:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:26.199 06:11:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:26.199 06:11:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:26.199 06:11:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:26.199 06:11:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:26.199 06:11:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:26.199 06:11:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:26.199 06:11:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:26.199 06:11:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:26.199 06:11:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.199 06:11:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:26.199 06:11:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:26.199 "name": "Existed_Raid", 00:19:26.199 "uuid": "0828c919-3d30-4ec2-94c8-89140435163a", 00:19:26.199 "strip_size_kb": 64, 00:19:26.199 "state": "configuring", 00:19:26.199 "raid_level": "concat", 00:19:26.199 "superblock": true, 00:19:26.199 "num_base_bdevs": 4, 00:19:26.199 "num_base_bdevs_discovered": 3, 00:19:26.199 "num_base_bdevs_operational": 4, 00:19:26.199 "base_bdevs_list": [ 00:19:26.199 { 
00:19:26.199 "name": "BaseBdev1", 00:19:26.199 "uuid": "793f80be-0d29-4b47-b65d-0048307d5a3b", 00:19:26.199 "is_configured": true, 00:19:26.199 "data_offset": 2048, 00:19:26.199 "data_size": 63488 00:19:26.199 }, 00:19:26.199 { 00:19:26.199 "name": "BaseBdev2", 00:19:26.199 "uuid": "6d2af4b6-1d39-481b-b816-de6bf6967776", 00:19:26.199 "is_configured": true, 00:19:26.199 "data_offset": 2048, 00:19:26.199 "data_size": 63488 00:19:26.199 }, 00:19:26.199 { 00:19:26.199 "name": "BaseBdev3", 00:19:26.199 "uuid": "0d81ae41-4b01-4535-afc7-489185b61267", 00:19:26.199 "is_configured": true, 00:19:26.199 "data_offset": 2048, 00:19:26.199 "data_size": 63488 00:19:26.199 }, 00:19:26.199 { 00:19:26.199 "name": "BaseBdev4", 00:19:26.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.199 "is_configured": false, 00:19:26.199 "data_offset": 0, 00:19:26.199 "data_size": 0 00:19:26.199 } 00:19:26.199 ] 00:19:26.199 }' 00:19:26.199 06:11:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:26.199 06:11:56 -- common/autotest_common.sh@10 -- # set +x 00:19:26.767 06:11:57 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:27.026 [2024-06-11 06:11:57.627672] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:27.026 [2024-06-11 06:11:57.628376] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:19:27.026 [2024-06-11 06:11:57.628584] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:27.026 [2024-06-11 06:11:57.628913] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:19:27.026 [2024-06-11 06:11:57.629483] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:19:27.026 [2024-06-11 06:11:57.629689] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:19:27.026 BaseBdev4 00:19:27.026 [2024-06-11 06:11:57.630166] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:27.026 06:11:57 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:27.026 06:11:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:19:27.026 06:11:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:27.026 06:11:57 -- common/autotest_common.sh@889 -- # local i 00:19:27.026 06:11:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:27.026 06:11:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:27.026 06:11:57 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:27.285 06:11:57 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:27.545 [ 00:19:27.545 { 00:19:27.545 "name": "BaseBdev4", 00:19:27.545 "aliases": [ 00:19:27.545 "b6d973f4-8c4a-4a15-99bc-67110a34f4ed" 00:19:27.545 ], 00:19:27.545 "product_name": "Malloc disk", 00:19:27.545 "block_size": 512, 00:19:27.545 "num_blocks": 65536, 00:19:27.545 "uuid": "b6d973f4-8c4a-4a15-99bc-67110a34f4ed", 00:19:27.545 "assigned_rate_limits": { 00:19:27.545 "rw_ios_per_sec": 0, 00:19:27.545 "rw_mbytes_per_sec": 0, 00:19:27.545 "r_mbytes_per_sec": 0, 00:19:27.545 "w_mbytes_per_sec": 0 00:19:27.545 }, 00:19:27.545 "claimed": true, 00:19:27.545 "claim_type": "exclusive_write", 00:19:27.545 "zoned": false, 
00:19:27.545 "supported_io_types": { 00:19:27.545 "read": true, 00:19:27.545 "write": true, 00:19:27.545 "unmap": true, 00:19:27.545 "write_zeroes": true, 00:19:27.545 "flush": true, 00:19:27.545 "reset": true, 00:19:27.545 "compare": false, 00:19:27.545 "compare_and_write": false, 00:19:27.545 "abort": true, 00:19:27.545 "nvme_admin": false, 00:19:27.545 "nvme_io": false 00:19:27.545 }, 00:19:27.545 "memory_domains": [ 00:19:27.545 { 00:19:27.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:27.545 "dma_device_type": 2 00:19:27.545 } 00:19:27.545 ], 00:19:27.545 "driver_specific": {} 00:19:27.545 } 00:19:27.545 ] 00:19:27.545 06:11:57 -- common/autotest_common.sh@895 -- # return 0 00:19:27.545 06:11:57 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:27.545 06:11:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:27.545 06:11:57 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:27.545 06:11:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:27.545 06:11:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:27.545 06:11:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:27.545 06:11:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:27.545 06:11:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:27.545 06:11:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:27.545 06:11:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:27.545 06:11:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:27.545 06:11:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:27.545 06:11:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.545 06:11:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.545 06:11:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:27.545 "name": "Existed_Raid", 00:19:27.545 "uuid": "0828c919-3d30-4ec2-94c8-89140435163a", 00:19:27.545 "strip_size_kb": 64, 00:19:27.545 "state": "online", 00:19:27.545 "raid_level": "concat", 00:19:27.545 "superblock": true, 00:19:27.545 "num_base_bdevs": 4, 00:19:27.545 "num_base_bdevs_discovered": 4, 00:19:27.545 "num_base_bdevs_operational": 4, 00:19:27.545 "base_bdevs_list": [ 00:19:27.545 { 00:19:27.545 "name": "BaseBdev1", 00:19:27.545 "uuid": "793f80be-0d29-4b47-b65d-0048307d5a3b", 00:19:27.545 "is_configured": true, 00:19:27.545 "data_offset": 2048, 00:19:27.545 "data_size": 63488 00:19:27.545 }, 00:19:27.545 { 00:19:27.545 "name": "BaseBdev2", 00:19:27.545 "uuid": "6d2af4b6-1d39-481b-b816-de6bf6967776", 00:19:27.545 "is_configured": true, 00:19:27.545 "data_offset": 2048, 00:19:27.545 "data_size": 63488 00:19:27.545 }, 00:19:27.545 { 00:19:27.545 "name": "BaseBdev3", 00:19:27.545 "uuid": "0d81ae41-4b01-4535-afc7-489185b61267", 00:19:27.545 "is_configured": true, 00:19:27.545 "data_offset": 2048, 00:19:27.545 "data_size": 63488 00:19:27.545 }, 00:19:27.545 { 00:19:27.545 "name": "BaseBdev4", 00:19:27.545 "uuid": "b6d973f4-8c4a-4a15-99bc-67110a34f4ed", 00:19:27.545 "is_configured": true, 00:19:27.545 "data_offset": 2048, 00:19:27.545 "data_size": 63488 00:19:27.545 } 00:19:27.545 ] 00:19:27.545 }' 00:19:27.545 06:11:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:27.545 06:11:58 -- common/autotest_common.sh@10 -- # set +x 00:19:28.482 06:11:58 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:19:28.482 [2024-06-11 06:11:58.937491] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:28.482 [2024-06-11 06:11:58.937726] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:28.482 [2024-06-11 06:11:58.937918] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:28.482 06:11:59 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:28.482 06:11:59 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:19:28.482 06:11:59 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:28.482 06:11:59 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:28.482 06:11:59 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:19:28.482 06:11:59 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:19:28.482 06:11:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:28.482 06:11:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:28.482 06:11:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:28.482 06:11:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:28.482 06:11:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:28.482 06:11:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:28.482 06:11:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:28.482 06:11:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:28.482 06:11:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:28.482 06:11:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.482 06:11:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.741 06:11:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:28.741 "name": "Existed_Raid", 00:19:28.741 "uuid": "0828c919-3d30-4ec2-94c8-89140435163a", 00:19:28.741 "strip_size_kb": 64, 00:19:28.741 "state": "offline", 00:19:28.741 "raid_level": "concat", 00:19:28.741 "superblock": true, 00:19:28.741 "num_base_bdevs": 4, 00:19:28.741 "num_base_bdevs_discovered": 3, 00:19:28.741 "num_base_bdevs_operational": 3, 00:19:28.741 "base_bdevs_list": [ 00:19:28.741 { 00:19:28.741 "name": null, 00:19:28.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.741 "is_configured": false, 00:19:28.741 "data_offset": 2048, 00:19:28.741 "data_size": 63488 00:19:28.741 }, 00:19:28.741 { 00:19:28.741 "name": "BaseBdev2", 00:19:28.741 "uuid": "6d2af4b6-1d39-481b-b816-de6bf6967776", 00:19:28.741 "is_configured": true, 00:19:28.741 "data_offset": 2048, 00:19:28.741 "data_size": 63488 00:19:28.741 }, 00:19:28.741 { 00:19:28.741 "name": "BaseBdev3", 00:19:28.741 "uuid": "0d81ae41-4b01-4535-afc7-489185b61267", 00:19:28.741 "is_configured": true, 00:19:28.741 "data_offset": 2048, 00:19:28.741 "data_size": 63488 00:19:28.741 }, 00:19:28.741 { 00:19:28.741 "name": "BaseBdev4", 00:19:28.741 "uuid": "b6d973f4-8c4a-4a15-99bc-67110a34f4ed", 00:19:28.741 "is_configured": true, 00:19:28.741 "data_offset": 2048, 00:19:28.741 "data_size": 63488 00:19:28.741 } 00:19:28.741 ] 00:19:28.741 }' 00:19:28.741 06:11:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:28.741 06:11:59 -- common/autotest_common.sh@10 -- # set +x 00:19:29.307 06:11:59 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:29.307 06:11:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:29.307 06:11:59 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.307 06:11:59 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:29.566 06:12:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:29.566 06:12:00 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:29.566 06:12:00 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:29.824 [2024-06-11 06:12:00.216339] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:29.824 06:12:00 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:29.824 06:12:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:29.824 06:12:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.824 06:12:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:30.082 06:12:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:30.082 06:12:00 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:30.082 06:12:00 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:30.339 [2024-06-11 06:12:00.750366] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:30.339 06:12:00 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:30.339 06:12:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:30.339 06:12:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.339 06:12:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:30.597 06:12:01 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:30.597 06:12:01 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:30.597 06:12:01 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:30.853 [2024-06-11 06:12:01.275276] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:30.853 [2024-06-11 06:12:01.275485] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:19:30.853 06:12:01 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:30.853 06:12:01 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:30.854 06:12:01 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.854 06:12:01 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:31.111 06:12:01 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:31.111 06:12:01 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:31.111 06:12:01 -- bdev/bdev_raid.sh@287 -- # killprocess 120720 00:19:31.111 06:12:01 -- common/autotest_common.sh@926 -- # '[' -z 120720 ']' 00:19:31.111 06:12:01 -- common/autotest_common.sh@930 -- # kill -0 120720 00:19:31.111 06:12:01 -- common/autotest_common.sh@931 -- # uname 00:19:31.111 06:12:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:31.111 06:12:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 120720 00:19:31.111 06:12:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:31.111 06:12:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:31.111 06:12:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 120720' 00:19:31.111 killing process with pid 120720 
00:19:31.111 06:12:01 -- common/autotest_common.sh@945 -- # kill 120720 00:19:31.112 [2024-06-11 06:12:01.686991] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:31.112 06:12:01 -- common/autotest_common.sh@950 -- # wait 120720 00:19:31.112 [2024-06-11 06:12:01.687291] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:32.486 ************************************ 00:19:32.486 END TEST raid_state_function_test_sb 00:19:32.486 ************************************ 00:19:32.486 06:12:03 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:32.486 00:19:32.486 real 0m14.488s 00:19:32.486 user 0m24.307s 00:19:32.486 sys 0m2.434s 00:19:32.486 06:12:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:32.486 06:12:03 -- common/autotest_common.sh@10 -- # set +x 00:19:32.486 06:12:03 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:19:32.486 06:12:03 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:32.486 06:12:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:32.486 06:12:03 -- common/autotest_common.sh@10 -- # set +x 00:19:32.486 ************************************ 00:19:32.486 START TEST raid_superblock_test 00:19:32.486 ************************************ 00:19:32.486 06:12:03 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 4 00:19:32.486 06:12:03 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:19:32.486 06:12:03 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:19:32.486 06:12:03 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:19:32.486 06:12:03 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:19:32.486 06:12:03 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:19:32.486 06:12:03 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:19:32.486 06:12:03 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:19:32.486 06:12:03 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:19:32.486 06:12:03 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:19:32.486 06:12:03 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:19:32.486 06:12:03 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:19:32.745 06:12:03 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:19:32.745 06:12:03 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:19:32.745 06:12:03 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:19:32.745 06:12:03 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:19:32.745 06:12:03 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:19:32.745 06:12:03 -- bdev/bdev_raid.sh@357 -- # raid_pid=121164 00:19:32.745 06:12:03 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:32.745 06:12:03 -- bdev/bdev_raid.sh@358 -- # waitforlisten 121164 /var/tmp/spdk-raid.sock 00:19:32.745 06:12:03 -- common/autotest_common.sh@819 -- # '[' -z 121164 ']' 00:19:32.745 06:12:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:32.745 06:12:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:32.745 06:12:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:32.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
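With the previous app killed, raid_superblock_test brings up a fresh RPC target before issuing any bdev commands: a dedicated bdev_svc instance on its own UNIX socket with bdev_raid debug logging, whose pid is recorded so the test can wait for the listener now and kill the process at the end. A minimal sketch of that bring-up, using only the paths and flags visible in this trace (waitforlisten and killprocess are the autotest_common.sh helpers invoked above; running the server in the background with & is an assumption about the harness, not shown verbatim in this excerpt):

  svc=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
  $svc -r /var/tmp/spdk-raid.sock -L bdev_raid &      # RPC target with bdev_raid debug logs
  raid_pid=$!
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # block until the socket accepts RPCs
  # ... test body: create bdevs, assemble the array, verify superblock behaviour ...
  killprocess "$raid_pid"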
00:19:32.745 06:12:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:32.745 06:12:03 -- common/autotest_common.sh@10 -- # set +x 00:19:32.745 [2024-06-11 06:12:03.214555] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:32.745 [2024-06-11 06:12:03.214944] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121164 ] 00:19:33.003 [2024-06-11 06:12:03.402844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.262 [2024-06-11 06:12:03.691814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.520 [2024-06-11 06:12:03.936308] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:34.453 06:12:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:34.453 06:12:04 -- common/autotest_common.sh@852 -- # return 0 00:19:34.453 06:12:04 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:19:34.453 06:12:04 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:34.453 06:12:04 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:19:34.453 06:12:04 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:19:34.453 06:12:04 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:34.453 06:12:04 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:34.453 06:12:04 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:34.453 06:12:04 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:34.453 06:12:04 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:34.453 malloc1 00:19:34.453 06:12:04 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:34.711 [2024-06-11 06:12:05.176902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:34.711 [2024-06-11 06:12:05.177190] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.711 [2024-06-11 06:12:05.177260] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:19:34.711 [2024-06-11 06:12:05.177384] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.711 [2024-06-11 06:12:05.180154] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.711 [2024-06-11 06:12:05.180320] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:34.711 pt1 00:19:34.711 06:12:05 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:34.711 06:12:05 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:34.711 06:12:05 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:19:34.711 06:12:05 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:19:34.711 06:12:05 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:34.711 06:12:05 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:34.711 06:12:05 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:34.711 06:12:05 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:34.711 06:12:05 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:34.969 malloc2 00:19:34.969 06:12:05 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:34.969 [2024-06-11 06:12:05.605643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:34.969 [2024-06-11 06:12:05.605905] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.969 [2024-06-11 06:12:05.605996] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:19:34.969 [2024-06-11 06:12:05.606133] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.969 [2024-06-11 06:12:05.608759] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.969 [2024-06-11 06:12:05.608929] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:34.969 pt2 00:19:35.228 06:12:05 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:35.228 06:12:05 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:35.228 06:12:05 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:19:35.228 06:12:05 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:19:35.228 06:12:05 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:35.228 06:12:05 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:35.228 06:12:05 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:35.228 06:12:05 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:35.228 06:12:05 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:35.228 malloc3 00:19:35.228 06:12:05 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:35.487 [2024-06-11 06:12:06.071796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:35.487 [2024-06-11 06:12:06.072081] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.487 [2024-06-11 06:12:06.072165] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:19:35.487 [2024-06-11 06:12:06.072292] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.487 [2024-06-11 06:12:06.074967] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.487 [2024-06-11 06:12:06.075127] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:35.487 pt3 00:19:35.487 06:12:06 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:35.487 06:12:06 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:35.487 06:12:06 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:19:35.487 06:12:06 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:19:35.487 06:12:06 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:35.487 06:12:06 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:35.487 06:12:06 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:35.487 06:12:06 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:35.487 06:12:06 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:19:35.746 malloc4 00:19:35.746 06:12:06 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:36.007 [2024-06-11 06:12:06.543306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:36.007 [2024-06-11 06:12:06.543594] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.007 [2024-06-11 06:12:06.543666] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:19:36.007 [2024-06-11 06:12:06.543787] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.007 [2024-06-11 06:12:06.546532] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.007 [2024-06-11 06:12:06.546688] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:36.007 pt4 00:19:36.007 06:12:06 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:36.007 06:12:06 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:36.007 06:12:06 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:19:36.330 [2024-06-11 06:12:06.719604] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:36.330 [2024-06-11 06:12:06.722020] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:36.330 [2024-06-11 06:12:06.722212] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:36.330 [2024-06-11 06:12:06.722318] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:36.330 [2024-06-11 06:12:06.722635] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:19:36.330 [2024-06-11 06:12:06.722737] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:36.330 [2024-06-11 06:12:06.722917] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:19:36.330 [2024-06-11 06:12:06.723318] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:19:36.331 [2024-06-11 06:12:06.723356] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:19:36.331 [2024-06-11 06:12:06.723622] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.331 06:12:06 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:36.331 06:12:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:36.331 06:12:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:36.331 06:12:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:36.331 06:12:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:36.331 06:12:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:36.331 06:12:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:36.331 06:12:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:36.331 06:12:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:36.331 06:12:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:36.331 06:12:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:19:36.331 06:12:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.331 06:12:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:36.331 "name": "raid_bdev1", 00:19:36.331 "uuid": "f1688aaa-6fa3-4b8d-aa44-e2ccb5fcf84e", 00:19:36.331 "strip_size_kb": 64, 00:19:36.331 "state": "online", 00:19:36.331 "raid_level": "concat", 00:19:36.331 "superblock": true, 00:19:36.331 "num_base_bdevs": 4, 00:19:36.331 "num_base_bdevs_discovered": 4, 00:19:36.331 "num_base_bdevs_operational": 4, 00:19:36.331 "base_bdevs_list": [ 00:19:36.331 { 00:19:36.331 "name": "pt1", 00:19:36.331 "uuid": "4e205dc8-de0c-5ea3-8740-246452c4a5bf", 00:19:36.331 "is_configured": true, 00:19:36.331 "data_offset": 2048, 00:19:36.331 "data_size": 63488 00:19:36.331 }, 00:19:36.331 { 00:19:36.331 "name": "pt2", 00:19:36.331 "uuid": "d1747c0a-1ab3-514f-aeb4-47b42cb47b59", 00:19:36.331 "is_configured": true, 00:19:36.331 "data_offset": 2048, 00:19:36.331 "data_size": 63488 00:19:36.331 }, 00:19:36.331 { 00:19:36.331 "name": "pt3", 00:19:36.331 "uuid": "80f9116c-8f5f-508c-83a1-584c9f8e536f", 00:19:36.331 "is_configured": true, 00:19:36.331 "data_offset": 2048, 00:19:36.331 "data_size": 63488 00:19:36.331 }, 00:19:36.331 { 00:19:36.331 "name": "pt4", 00:19:36.331 "uuid": "b5429996-6db0-50ce-994f-e3a38988101c", 00:19:36.331 "is_configured": true, 00:19:36.331 "data_offset": 2048, 00:19:36.331 "data_size": 63488 00:19:36.331 } 00:19:36.331 ] 00:19:36.331 }' 00:19:36.331 06:12:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:36.331 06:12:06 -- common/autotest_common.sh@10 -- # set +x 00:19:36.897 06:12:07 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:36.897 06:12:07 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:19:37.156 [2024-06-11 06:12:07.767942] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:37.156 06:12:07 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=f1688aaa-6fa3-4b8d-aa44-e2ccb5fcf84e 00:19:37.156 06:12:07 -- bdev/bdev_raid.sh@380 -- # '[' -z f1688aaa-6fa3-4b8d-aa44-e2ccb5fcf84e ']' 00:19:37.156 06:12:07 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:37.414 [2024-06-11 06:12:07.999801] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:37.414 [2024-06-11 06:12:07.999889] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:37.414 [2024-06-11 06:12:08.000018] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:37.414 [2024-06-11 06:12:08.000118] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:37.414 [2024-06-11 06:12:08.000148] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:19:37.414 06:12:08 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.414 06:12:08 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:19:37.672 06:12:08 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:19:37.672 06:12:08 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:19:37.672 06:12:08 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:37.672 06:12:08 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
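The JSON at the start of this block is the happy-path result: raid_bdev1 online with all four base bdevs discovered, each contributing 63488 data blocks after 2048 blocks are reserved at data_offset for the superblock; the trace then tears everything down again, deleting raid_bdev1, confirming bdev_raid_get_bdevs returns nothing, and removing each passthru bdev in turn. A condensed sketch of that build-and-teardown cycle, recombining only RPCs traced in this log (socket path, strip size, names, and UUIDs as above):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3 4; do
      $rpc bdev_malloc_create 32 512 -b "malloc$i"    # 65536 blocks of 512 B each
      $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
          -u "00000000-0000-0000-0000-00000000000$i"  # fixed per-member UUIDs, as above
  done
  $rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1
  $rpc bdev_raid_get_bdevs all | \
      jq -r '.[] | select(.name == "raid_bdev1") | .state'     # expect: online
  $rpc bdev_raid_delete raid_bdev1                             # drop the array...
  for i in 1 2 3 4; do $rpc bdev_passthru_delete "pt$i"; done  # ...then its members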
00:19:37.931 06:12:08 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:37.931 06:12:08 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:38.190 06:12:08 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:38.190 06:12:08 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:38.190 06:12:08 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:38.190 06:12:08 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:38.448 06:12:08 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:38.448 06:12:08 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:38.707 06:12:09 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:19:38.707 06:12:09 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:38.708 06:12:09 -- common/autotest_common.sh@640 -- # local es=0 00:19:38.708 06:12:09 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:38.708 06:12:09 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:38.708 06:12:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:38.708 06:12:09 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:38.708 06:12:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:38.708 06:12:09 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:38.708 06:12:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:38.708 06:12:09 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:38.708 06:12:09 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:38.708 06:12:09 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:38.966 [2024-06-11 06:12:09.428010] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:38.966 [2024-06-11 06:12:09.430439] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:38.966 [2024-06-11 06:12:09.430617] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:38.966 [2024-06-11 06:12:09.430691] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:38.966 [2024-06-11 06:12:09.430822] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:19:38.966 [2024-06-11 06:12:09.430933] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:19:38.966 [2024-06-11 06:12:09.431059] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:19:38.966 
[2024-06-11 06:12:09.431146] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:19:38.966 [2024-06-11 06:12:09.431192] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:38.966 [2024-06-11 06:12:09.431349] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring 00:19:38.967 request: 00:19:38.967 { 00:19:38.967 "name": "raid_bdev1", 00:19:38.967 "raid_level": "concat", 00:19:38.967 "base_bdevs": [ 00:19:38.967 "malloc1", 00:19:38.967 "malloc2", 00:19:38.967 "malloc3", 00:19:38.967 "malloc4" 00:19:38.967 ], 00:19:38.967 "superblock": false, 00:19:38.967 "strip_size_kb": 64, 00:19:38.967 "method": "bdev_raid_create", 00:19:38.967 "req_id": 1 00:19:38.967 } 00:19:38.967 Got JSON-RPC error response 00:19:38.967 response: 00:19:38.967 { 00:19:38.967 "code": -17, 00:19:38.967 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:38.967 } 00:19:38.967 06:12:09 -- common/autotest_common.sh@643 -- # es=1 00:19:38.967 06:12:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:38.967 06:12:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:38.967 06:12:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:38.967 06:12:09 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.967 06:12:09 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:19:39.225 06:12:09 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:19:39.225 06:12:09 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:19:39.225 06:12:09 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:39.225 [2024-06-11 06:12:09.776067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:39.225 [2024-06-11 06:12:09.776363] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:39.225 [2024-06-11 06:12:09.776433] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:39.226 [2024-06-11 06:12:09.776526] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.226 [2024-06-11 06:12:09.779266] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.226 [2024-06-11 06:12:09.779450] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:39.226 [2024-06-11 06:12:09.779675] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:39.226 [2024-06-11 06:12:09.779762] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:39.226 pt1 00:19:39.226 06:12:09 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:19:39.226 06:12:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:39.226 06:12:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:39.226 06:12:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:39.226 06:12:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:39.226 06:12:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:39.226 06:12:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:39.226 06:12:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:39.226 06:12:09 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:19:39.226 06:12:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:39.226 06:12:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.226 06:12:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:39.484 06:12:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:39.484 "name": "raid_bdev1", 00:19:39.484 "uuid": "f1688aaa-6fa3-4b8d-aa44-e2ccb5fcf84e", 00:19:39.484 "strip_size_kb": 64, 00:19:39.484 "state": "configuring", 00:19:39.484 "raid_level": "concat", 00:19:39.484 "superblock": true, 00:19:39.484 "num_base_bdevs": 4, 00:19:39.484 "num_base_bdevs_discovered": 1, 00:19:39.484 "num_base_bdevs_operational": 4, 00:19:39.484 "base_bdevs_list": [ 00:19:39.484 { 00:19:39.484 "name": "pt1", 00:19:39.484 "uuid": "4e205dc8-de0c-5ea3-8740-246452c4a5bf", 00:19:39.484 "is_configured": true, 00:19:39.484 "data_offset": 2048, 00:19:39.484 "data_size": 63488 00:19:39.484 }, 00:19:39.484 { 00:19:39.484 "name": null, 00:19:39.484 "uuid": "d1747c0a-1ab3-514f-aeb4-47b42cb47b59", 00:19:39.484 "is_configured": false, 00:19:39.484 "data_offset": 2048, 00:19:39.484 "data_size": 63488 00:19:39.484 }, 00:19:39.484 { 00:19:39.484 "name": null, 00:19:39.484 "uuid": "80f9116c-8f5f-508c-83a1-584c9f8e536f", 00:19:39.484 "is_configured": false, 00:19:39.484 "data_offset": 2048, 00:19:39.484 "data_size": 63488 00:19:39.484 }, 00:19:39.484 { 00:19:39.484 "name": null, 00:19:39.484 "uuid": "b5429996-6db0-50ce-994f-e3a38988101c", 00:19:39.484 "is_configured": false, 00:19:39.484 "data_offset": 2048, 00:19:39.484 "data_size": 63488 00:19:39.484 } 00:19:39.484 ] 00:19:39.484 }' 00:19:39.484 06:12:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:39.484 06:12:09 -- common/autotest_common.sh@10 -- # set +x 00:19:40.051 06:12:10 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:19:40.051 06:12:10 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:40.309 [2024-06-11 06:12:10.708331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:40.309 [2024-06-11 06:12:10.708599] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.309 [2024-06-11 06:12:10.708678] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:40.309 [2024-06-11 06:12:10.708770] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.309 [2024-06-11 06:12:10.709409] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.309 [2024-06-11 06:12:10.709565] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:40.309 [2024-06-11 06:12:10.709772] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:40.309 [2024-06-11 06:12:10.709862] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:40.309 pt2 00:19:40.309 06:12:10 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:40.309 [2024-06-11 06:12:10.884373] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:40.309 06:12:10 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:19:40.309 06:12:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
00:19:40.309 06:12:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:40.309 06:12:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:40.309 06:12:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:40.309 06:12:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:40.309 06:12:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:40.309 06:12:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:40.309 06:12:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:40.309 06:12:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:40.309 06:12:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.309 06:12:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.567 06:12:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:40.567 "name": "raid_bdev1", 00:19:40.567 "uuid": "f1688aaa-6fa3-4b8d-aa44-e2ccb5fcf84e", 00:19:40.567 "strip_size_kb": 64, 00:19:40.567 "state": "configuring", 00:19:40.567 "raid_level": "concat", 00:19:40.567 "superblock": true, 00:19:40.567 "num_base_bdevs": 4, 00:19:40.567 "num_base_bdevs_discovered": 1, 00:19:40.567 "num_base_bdevs_operational": 4, 00:19:40.567 "base_bdevs_list": [ 00:19:40.567 { 00:19:40.567 "name": "pt1", 00:19:40.567 "uuid": "4e205dc8-de0c-5ea3-8740-246452c4a5bf", 00:19:40.567 "is_configured": true, 00:19:40.567 "data_offset": 2048, 00:19:40.567 "data_size": 63488 00:19:40.567 }, 00:19:40.567 { 00:19:40.567 "name": null, 00:19:40.567 "uuid": "d1747c0a-1ab3-514f-aeb4-47b42cb47b59", 00:19:40.567 "is_configured": false, 00:19:40.568 "data_offset": 2048, 00:19:40.568 "data_size": 63488 00:19:40.568 }, 00:19:40.568 { 00:19:40.568 "name": null, 00:19:40.568 "uuid": "80f9116c-8f5f-508c-83a1-584c9f8e536f", 00:19:40.568 "is_configured": false, 00:19:40.568 "data_offset": 2048, 00:19:40.568 "data_size": 63488 00:19:40.568 }, 00:19:40.568 { 00:19:40.568 "name": null, 00:19:40.568 "uuid": "b5429996-6db0-50ce-994f-e3a38988101c", 00:19:40.568 "is_configured": false, 00:19:40.568 "data_offset": 2048, 00:19:40.568 "data_size": 63488 00:19:40.568 } 00:19:40.568 ] 00:19:40.568 }' 00:19:40.568 06:12:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:40.568 06:12:11 -- common/autotest_common.sh@10 -- # set +x 00:19:41.135 06:12:11 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:19:41.135 06:12:11 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:41.135 06:12:11 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:41.393 [2024-06-11 06:12:11.840556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:41.393 [2024-06-11 06:12:11.840845] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.393 [2024-06-11 06:12:11.840923] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:41.393 [2024-06-11 06:12:11.841023] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.393 [2024-06-11 06:12:11.841589] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.393 [2024-06-11 06:12:11.841757] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:41.393 [2024-06-11 06:12:11.841987] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:19:41.393 [2024-06-11 06:12:11.842098] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:41.393 pt2 00:19:41.393 06:12:11 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:41.393 06:12:11 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:41.393 06:12:11 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:41.654 [2024-06-11 06:12:12.088586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:41.654 [2024-06-11 06:12:12.088854] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.654 [2024-06-11 06:12:12.088920] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:41.654 [2024-06-11 06:12:12.089023] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.654 [2024-06-11 06:12:12.089557] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.654 [2024-06-11 06:12:12.089728] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:41.654 [2024-06-11 06:12:12.089917] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:41.654 [2024-06-11 06:12:12.090039] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:41.654 pt3 00:19:41.654 06:12:12 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:41.654 06:12:12 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:41.654 06:12:12 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:41.654 [2024-06-11 06:12:12.252623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:41.654 [2024-06-11 06:12:12.252889] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.654 [2024-06-11 06:12:12.252963] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:41.654 [2024-06-11 06:12:12.253072] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.654 [2024-06-11 06:12:12.253559] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.654 [2024-06-11 06:12:12.253720] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:41.654 [2024-06-11 06:12:12.253912] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:41.654 [2024-06-11 06:12:12.253999] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:41.654 [2024-06-11 06:12:12.254239] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:19:41.654 [2024-06-11 06:12:12.254344] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:41.654 [2024-06-11 06:12:12.254480] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:41.654 [2024-06-11 06:12:12.254874] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:19:41.654 [2024-06-11 06:12:12.254911] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:19:41.654 [2024-06-11 06:12:12.255110] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:19:41.654 pt4 00:19:41.654 06:12:12 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:41.654 06:12:12 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:41.654 06:12:12 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:41.654 06:12:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:41.654 06:12:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:41.654 06:12:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:41.654 06:12:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:41.654 06:12:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:41.654 06:12:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:41.654 06:12:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:41.654 06:12:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:41.654 06:12:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:41.654 06:12:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.654 06:12:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.911 06:12:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:41.911 "name": "raid_bdev1", 00:19:41.911 "uuid": "f1688aaa-6fa3-4b8d-aa44-e2ccb5fcf84e", 00:19:41.911 "strip_size_kb": 64, 00:19:41.911 "state": "online", 00:19:41.911 "raid_level": "concat", 00:19:41.911 "superblock": true, 00:19:41.911 "num_base_bdevs": 4, 00:19:41.911 "num_base_bdevs_discovered": 4, 00:19:41.911 "num_base_bdevs_operational": 4, 00:19:41.911 "base_bdevs_list": [ 00:19:41.911 { 00:19:41.911 "name": "pt1", 00:19:41.911 "uuid": "4e205dc8-de0c-5ea3-8740-246452c4a5bf", 00:19:41.911 "is_configured": true, 00:19:41.911 "data_offset": 2048, 00:19:41.911 "data_size": 63488 00:19:41.911 }, 00:19:41.911 { 00:19:41.911 "name": "pt2", 00:19:41.911 "uuid": "d1747c0a-1ab3-514f-aeb4-47b42cb47b59", 00:19:41.911 "is_configured": true, 00:19:41.911 "data_offset": 2048, 00:19:41.911 "data_size": 63488 00:19:41.911 }, 00:19:41.911 { 00:19:41.911 "name": "pt3", 00:19:41.911 "uuid": "80f9116c-8f5f-508c-83a1-584c9f8e536f", 00:19:41.911 "is_configured": true, 00:19:41.911 "data_offset": 2048, 00:19:41.911 "data_size": 63488 00:19:41.911 }, 00:19:41.911 { 00:19:41.911 "name": "pt4", 00:19:41.911 "uuid": "b5429996-6db0-50ce-994f-e3a38988101c", 00:19:41.911 "is_configured": true, 00:19:41.911 "data_offset": 2048, 00:19:41.911 "data_size": 63488 00:19:41.912 } 00:19:41.912 ] 00:19:41.912 }' 00:19:41.912 06:12:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:41.912 06:12:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.478 06:12:13 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:42.478 06:12:13 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:19:42.737 [2024-06-11 06:12:13.313044] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:42.737 06:12:13 -- bdev/bdev_raid.sh@430 -- # '[' f1688aaa-6fa3-4b8d-aa44-e2ccb5fcf84e '!=' f1688aaa-6fa3-4b8d-aa44-e2ccb5fcf84e ']' 00:19:42.737 06:12:13 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:19:42.737 06:12:13 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:42.737 06:12:13 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:42.737 06:12:13 -- bdev/bdev_raid.sh@511 -- # killprocess 121164 00:19:42.737 06:12:13 -- common/autotest_common.sh@926 -- # '[' 
-z 121164 ']' 00:19:42.737 06:12:13 -- common/autotest_common.sh@930 -- # kill -0 121164 00:19:42.737 06:12:13 -- common/autotest_common.sh@931 -- # uname 00:19:42.737 06:12:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:42.737 06:12:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121164 00:19:42.737 06:12:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:42.737 06:12:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:42.737 killing process with pid 121164 00:19:42.737 06:12:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 121164' 00:19:42.737 06:12:13 -- common/autotest_common.sh@945 -- # kill 121164 00:19:42.737 [2024-06-11 06:12:13.363563] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:42.737 [2024-06-11 06:12:13.363644] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:42.737 [2024-06-11 06:12:13.363716] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:42.737 [2024-06-11 06:12:13.363725] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:19:42.737 06:12:13 -- common/autotest_common.sh@950 -- # wait 121164 00:19:43.304 [2024-06-11 06:12:13.770684] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:44.682 ************************************ 00:19:44.682 END TEST raid_superblock_test 00:19:44.682 ************************************ 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@513 -- # return 0 00:19:44.682 00:19:44.682 real 0m11.996s 00:19:44.682 user 0m19.253s 00:19:44.682 sys 0m1.967s 00:19:44.682 06:12:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:44.682 06:12:15 -- common/autotest_common.sh@10 -- # set +x 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:19:44.682 06:12:15 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:44.682 06:12:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:44.682 06:12:15 -- common/autotest_common.sh@10 -- # set +x 00:19:44.682 ************************************ 00:19:44.682 START TEST raid_state_function_test 00:19:44.682 ************************************ 00:19:44.682 06:12:15 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 false 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:44.682 06:12:15 -- 
bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@226 -- # raid_pid=121499 00:19:44.682 Process raid pid: 121499 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 121499' 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@228 -- # waitforlisten 121499 /var/tmp/spdk-raid.sock 00:19:44.682 06:12:15 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:44.682 06:12:15 -- common/autotest_common.sh@819 -- # '[' -z 121499 ']' 00:19:44.682 06:12:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:44.682 06:12:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:44.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:44.682 06:12:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:44.682 06:12:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:44.682 06:12:15 -- common/autotest_common.sh@10 -- # set +x 00:19:44.682 [2024-06-11 06:12:15.284097] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:19:44.682 [2024-06-11 06:12:15.284972] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.941 [2024-06-11 06:12:15.469807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.200 [2024-06-11 06:12:15.709333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.458 [2024-06-11 06:12:15.952082] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:45.717 06:12:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:45.717 06:12:16 -- common/autotest_common.sh@852 -- # return 0 00:19:45.717 06:12:16 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:45.975 [2024-06-11 06:12:16.424703] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:45.975 [2024-06-11 06:12:16.424810] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:45.975 [2024-06-11 06:12:16.424822] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:45.975 [2024-06-11 06:12:16.424844] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:45.976 [2024-06-11 06:12:16.424851] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:45.976 [2024-06-11 06:12:16.424888] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:45.976 [2024-06-11 06:12:16.424895] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:45.976 [2024-06-11 06:12:16.424917] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:45.976 06:12:16 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:45.976 06:12:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:45.976 06:12:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:45.976 06:12:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:45.976 06:12:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:45.976 06:12:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:45.976 06:12:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:45.976 06:12:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:45.976 06:12:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:45.976 06:12:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:45.976 06:12:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:45.976 06:12:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.234 06:12:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:46.234 "name": "Existed_Raid", 00:19:46.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.234 "strip_size_kb": 0, 00:19:46.234 "state": "configuring", 00:19:46.234 "raid_level": "raid1", 00:19:46.234 "superblock": false, 00:19:46.234 "num_base_bdevs": 4, 00:19:46.234 "num_base_bdevs_discovered": 0, 00:19:46.234 "num_base_bdevs_operational": 4, 00:19:46.234 "base_bdevs_list": [ 00:19:46.234 { 00:19:46.234 "name": 
"BaseBdev1", 00:19:46.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.234 "is_configured": false, 00:19:46.234 "data_offset": 0, 00:19:46.234 "data_size": 0 00:19:46.234 }, 00:19:46.234 { 00:19:46.234 "name": "BaseBdev2", 00:19:46.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.234 "is_configured": false, 00:19:46.234 "data_offset": 0, 00:19:46.234 "data_size": 0 00:19:46.234 }, 00:19:46.234 { 00:19:46.234 "name": "BaseBdev3", 00:19:46.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.235 "is_configured": false, 00:19:46.235 "data_offset": 0, 00:19:46.235 "data_size": 0 00:19:46.235 }, 00:19:46.235 { 00:19:46.235 "name": "BaseBdev4", 00:19:46.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.235 "is_configured": false, 00:19:46.235 "data_offset": 0, 00:19:46.235 "data_size": 0 00:19:46.235 } 00:19:46.235 ] 00:19:46.235 }' 00:19:46.235 06:12:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:46.235 06:12:16 -- common/autotest_common.sh@10 -- # set +x 00:19:46.802 06:12:17 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:46.802 [2024-06-11 06:12:17.304700] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:46.802 [2024-06-11 06:12:17.304740] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:19:46.802 06:12:17 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:47.060 [2024-06-11 06:12:17.484776] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:47.061 [2024-06-11 06:12:17.484854] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:47.061 [2024-06-11 06:12:17.484865] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:47.061 [2024-06-11 06:12:17.484906] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:47.061 [2024-06-11 06:12:17.484914] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:47.061 [2024-06-11 06:12:17.484951] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:47.061 [2024-06-11 06:12:17.484958] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:47.061 [2024-06-11 06:12:17.484981] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:47.061 06:12:17 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:47.320 [2024-06-11 06:12:17.769025] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:47.320 BaseBdev1 00:19:47.320 06:12:17 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:47.320 06:12:17 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:47.320 06:12:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:47.320 06:12:17 -- common/autotest_common.sh@889 -- # local i 00:19:47.320 06:12:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:47.320 06:12:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:47.320 06:12:17 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:47.578 06:12:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:47.837 [ 00:19:47.837 { 00:19:47.837 "name": "BaseBdev1", 00:19:47.837 "aliases": [ 00:19:47.837 "4c53b37b-7702-403b-9c67-1d2e5dd7226a" 00:19:47.837 ], 00:19:47.837 "product_name": "Malloc disk", 00:19:47.837 "block_size": 512, 00:19:47.837 "num_blocks": 65536, 00:19:47.837 "uuid": "4c53b37b-7702-403b-9c67-1d2e5dd7226a", 00:19:47.837 "assigned_rate_limits": { 00:19:47.837 "rw_ios_per_sec": 0, 00:19:47.837 "rw_mbytes_per_sec": 0, 00:19:47.837 "r_mbytes_per_sec": 0, 00:19:47.837 "w_mbytes_per_sec": 0 00:19:47.837 }, 00:19:47.837 "claimed": true, 00:19:47.837 "claim_type": "exclusive_write", 00:19:47.837 "zoned": false, 00:19:47.837 "supported_io_types": { 00:19:47.837 "read": true, 00:19:47.837 "write": true, 00:19:47.837 "unmap": true, 00:19:47.837 "write_zeroes": true, 00:19:47.837 "flush": true, 00:19:47.837 "reset": true, 00:19:47.837 "compare": false, 00:19:47.837 "compare_and_write": false, 00:19:47.837 "abort": true, 00:19:47.837 "nvme_admin": false, 00:19:47.837 "nvme_io": false 00:19:47.837 }, 00:19:47.837 "memory_domains": [ 00:19:47.837 { 00:19:47.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.837 "dma_device_type": 2 00:19:47.837 } 00:19:47.837 ], 00:19:47.837 "driver_specific": {} 00:19:47.837 } 00:19:47.837 ] 00:19:47.837 06:12:18 -- common/autotest_common.sh@895 -- # return 0 00:19:47.837 06:12:18 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:47.837 06:12:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:47.837 06:12:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:47.837 06:12:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:47.837 06:12:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:47.837 06:12:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:47.837 06:12:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:47.837 06:12:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:47.837 06:12:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:47.837 06:12:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:47.837 06:12:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.837 06:12:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.837 06:12:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:47.837 "name": "Existed_Raid", 00:19:47.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.837 "strip_size_kb": 0, 00:19:47.837 "state": "configuring", 00:19:47.837 "raid_level": "raid1", 00:19:47.837 "superblock": false, 00:19:47.837 "num_base_bdevs": 4, 00:19:47.837 "num_base_bdevs_discovered": 1, 00:19:47.837 "num_base_bdevs_operational": 4, 00:19:47.837 "base_bdevs_list": [ 00:19:47.837 { 00:19:47.837 "name": "BaseBdev1", 00:19:47.837 "uuid": "4c53b37b-7702-403b-9c67-1d2e5dd7226a", 00:19:47.837 "is_configured": true, 00:19:47.837 "data_offset": 0, 00:19:47.837 "data_size": 65536 00:19:47.837 }, 00:19:47.837 { 00:19:47.837 "name": "BaseBdev2", 00:19:47.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.837 "is_configured": false, 00:19:47.837 "data_offset": 0, 00:19:47.837 "data_size": 0 00:19:47.837 }, 
00:19:47.837 { 00:19:47.837 "name": "BaseBdev3", 00:19:47.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.837 "is_configured": false, 00:19:47.837 "data_offset": 0, 00:19:47.837 "data_size": 0 00:19:47.837 }, 00:19:47.837 { 00:19:47.837 "name": "BaseBdev4", 00:19:47.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.837 "is_configured": false, 00:19:47.837 "data_offset": 0, 00:19:47.837 "data_size": 0 00:19:47.837 } 00:19:47.837 ] 00:19:47.837 }' 00:19:47.838 06:12:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:47.838 06:12:18 -- common/autotest_common.sh@10 -- # set +x 00:19:48.406 06:12:18 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:48.664 [2024-06-11 06:12:19.157271] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:48.664 [2024-06-11 06:12:19.157332] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:48.664 06:12:19 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:19:48.664 06:12:19 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:48.925 [2024-06-11 06:12:19.325382] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:48.925 [2024-06-11 06:12:19.327717] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:48.925 [2024-06-11 06:12:19.327802] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:48.925 [2024-06-11 06:12:19.327813] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:48.925 [2024-06-11 06:12:19.327839] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:48.925 [2024-06-11 06:12:19.327847] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:48.925 [2024-06-11 06:12:19.327864] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:48.925 06:12:19 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:48.925 06:12:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:48.925 06:12:19 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:48.925 06:12:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:48.925 06:12:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:48.925 06:12:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:48.925 06:12:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:48.925 06:12:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:48.925 06:12:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:48.925 06:12:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:48.925 06:12:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:48.925 06:12:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:48.925 06:12:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.925 06:12:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:48.925 06:12:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:48.925 "name": "Existed_Raid", 00:19:48.925 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:48.925 "strip_size_kb": 0, 00:19:48.925 "state": "configuring", 00:19:48.925 "raid_level": "raid1", 00:19:48.925 "superblock": false, 00:19:48.925 "num_base_bdevs": 4, 00:19:48.925 "num_base_bdevs_discovered": 1, 00:19:48.925 "num_base_bdevs_operational": 4, 00:19:48.925 "base_bdevs_list": [ 00:19:48.925 { 00:19:48.925 "name": "BaseBdev1", 00:19:48.925 "uuid": "4c53b37b-7702-403b-9c67-1d2e5dd7226a", 00:19:48.925 "is_configured": true, 00:19:48.925 "data_offset": 0, 00:19:48.925 "data_size": 65536 00:19:48.925 }, 00:19:48.925 { 00:19:48.925 "name": "BaseBdev2", 00:19:48.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.925 "is_configured": false, 00:19:48.925 "data_offset": 0, 00:19:48.925 "data_size": 0 00:19:48.925 }, 00:19:48.925 { 00:19:48.925 "name": "BaseBdev3", 00:19:48.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.925 "is_configured": false, 00:19:48.925 "data_offset": 0, 00:19:48.925 "data_size": 0 00:19:48.925 }, 00:19:48.925 { 00:19:48.925 "name": "BaseBdev4", 00:19:48.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.925 "is_configured": false, 00:19:48.925 "data_offset": 0, 00:19:48.925 "data_size": 0 00:19:48.925 } 00:19:48.925 ] 00:19:48.925 }' 00:19:48.925 06:12:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:48.925 06:12:19 -- common/autotest_common.sh@10 -- # set +x 00:19:49.493 06:12:20 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:49.752 [2024-06-11 06:12:20.270111] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:49.752 BaseBdev2 00:19:49.752 06:12:20 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:49.752 06:12:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:49.752 06:12:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:49.752 06:12:20 -- common/autotest_common.sh@889 -- # local i 00:19:49.752 06:12:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:49.752 06:12:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:49.752 06:12:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:50.011 06:12:20 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:50.270 [ 00:19:50.270 { 00:19:50.270 "name": "BaseBdev2", 00:19:50.270 "aliases": [ 00:19:50.270 "520ae1b8-0d39-4de4-8abb-8ab9698ff9c8" 00:19:50.270 ], 00:19:50.270 "product_name": "Malloc disk", 00:19:50.270 "block_size": 512, 00:19:50.270 "num_blocks": 65536, 00:19:50.270 "uuid": "520ae1b8-0d39-4de4-8abb-8ab9698ff9c8", 00:19:50.270 "assigned_rate_limits": { 00:19:50.270 "rw_ios_per_sec": 0, 00:19:50.270 "rw_mbytes_per_sec": 0, 00:19:50.270 "r_mbytes_per_sec": 0, 00:19:50.270 "w_mbytes_per_sec": 0 00:19:50.270 }, 00:19:50.270 "claimed": true, 00:19:50.270 "claim_type": "exclusive_write", 00:19:50.270 "zoned": false, 00:19:50.270 "supported_io_types": { 00:19:50.270 "read": true, 00:19:50.270 "write": true, 00:19:50.270 "unmap": true, 00:19:50.270 "write_zeroes": true, 00:19:50.270 "flush": true, 00:19:50.270 "reset": true, 00:19:50.270 "compare": false, 00:19:50.270 "compare_and_write": false, 00:19:50.270 "abort": true, 00:19:50.270 "nvme_admin": false, 00:19:50.270 "nvme_io": false 00:19:50.270 }, 00:19:50.270 "memory_domains": [ 00:19:50.270 { 
00:19:50.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.270 "dma_device_type": 2 00:19:50.270 } 00:19:50.270 ], 00:19:50.270 "driver_specific": {} 00:19:50.270 } 00:19:50.270 ] 00:19:50.270 06:12:20 -- common/autotest_common.sh@895 -- # return 0 00:19:50.270 06:12:20 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:50.270 06:12:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:50.271 06:12:20 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:50.271 06:12:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:50.271 06:12:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:50.271 06:12:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:50.271 06:12:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:50.271 06:12:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:50.271 06:12:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:50.271 06:12:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:50.271 06:12:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:50.271 06:12:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:50.271 06:12:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:50.271 06:12:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.529 06:12:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:50.529 "name": "Existed_Raid", 00:19:50.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.529 "strip_size_kb": 0, 00:19:50.529 "state": "configuring", 00:19:50.529 "raid_level": "raid1", 00:19:50.529 "superblock": false, 00:19:50.529 "num_base_bdevs": 4, 00:19:50.529 "num_base_bdevs_discovered": 2, 00:19:50.529 "num_base_bdevs_operational": 4, 00:19:50.529 "base_bdevs_list": [ 00:19:50.529 { 00:19:50.529 "name": "BaseBdev1", 00:19:50.529 "uuid": "4c53b37b-7702-403b-9c67-1d2e5dd7226a", 00:19:50.529 "is_configured": true, 00:19:50.529 "data_offset": 0, 00:19:50.529 "data_size": 65536 00:19:50.529 }, 00:19:50.529 { 00:19:50.529 "name": "BaseBdev2", 00:19:50.529 "uuid": "520ae1b8-0d39-4de4-8abb-8ab9698ff9c8", 00:19:50.529 "is_configured": true, 00:19:50.529 "data_offset": 0, 00:19:50.529 "data_size": 65536 00:19:50.529 }, 00:19:50.529 { 00:19:50.529 "name": "BaseBdev3", 00:19:50.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.529 "is_configured": false, 00:19:50.529 "data_offset": 0, 00:19:50.529 "data_size": 0 00:19:50.529 }, 00:19:50.529 { 00:19:50.529 "name": "BaseBdev4", 00:19:50.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.529 "is_configured": false, 00:19:50.529 "data_offset": 0, 00:19:50.529 "data_size": 0 00:19:50.529 } 00:19:50.529 ] 00:19:50.529 }' 00:19:50.529 06:12:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:50.529 06:12:20 -- common/autotest_common.sh@10 -- # set +x 00:19:51.095 06:12:21 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:51.354 [2024-06-11 06:12:21.807238] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:51.354 BaseBdev3 00:19:51.354 06:12:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:51.354 06:12:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:51.354 06:12:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:51.354 06:12:21 -- 
common/autotest_common.sh@889 -- # local i 00:19:51.354 06:12:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:51.354 06:12:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:51.354 06:12:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:51.354 06:12:21 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:51.613 [ 00:19:51.613 { 00:19:51.613 "name": "BaseBdev3", 00:19:51.613 "aliases": [ 00:19:51.613 "db9d62b9-c457-42bd-8960-5dfdb5f3c29a" 00:19:51.613 ], 00:19:51.613 "product_name": "Malloc disk", 00:19:51.613 "block_size": 512, 00:19:51.613 "num_blocks": 65536, 00:19:51.613 "uuid": "db9d62b9-c457-42bd-8960-5dfdb5f3c29a", 00:19:51.613 "assigned_rate_limits": { 00:19:51.613 "rw_ios_per_sec": 0, 00:19:51.613 "rw_mbytes_per_sec": 0, 00:19:51.613 "r_mbytes_per_sec": 0, 00:19:51.613 "w_mbytes_per_sec": 0 00:19:51.613 }, 00:19:51.613 "claimed": true, 00:19:51.613 "claim_type": "exclusive_write", 00:19:51.613 "zoned": false, 00:19:51.613 "supported_io_types": { 00:19:51.613 "read": true, 00:19:51.613 "write": true, 00:19:51.613 "unmap": true, 00:19:51.613 "write_zeroes": true, 00:19:51.613 "flush": true, 00:19:51.613 "reset": true, 00:19:51.613 "compare": false, 00:19:51.613 "compare_and_write": false, 00:19:51.613 "abort": true, 00:19:51.613 "nvme_admin": false, 00:19:51.613 "nvme_io": false 00:19:51.613 }, 00:19:51.613 "memory_domains": [ 00:19:51.613 { 00:19:51.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:51.613 "dma_device_type": 2 00:19:51.613 } 00:19:51.613 ], 00:19:51.613 "driver_specific": {} 00:19:51.613 } 00:19:51.613 ] 00:19:51.613 06:12:22 -- common/autotest_common.sh@895 -- # return 0 00:19:51.613 06:12:22 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:51.613 06:12:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:51.613 06:12:22 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:51.613 06:12:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:51.613 06:12:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:51.613 06:12:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:51.613 06:12:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:51.613 06:12:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:51.613 06:12:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:51.613 06:12:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:51.613 06:12:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:51.614 06:12:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:51.614 06:12:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.614 06:12:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.873 06:12:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:51.873 "name": "Existed_Raid", 00:19:51.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.873 "strip_size_kb": 0, 00:19:51.873 "state": "configuring", 00:19:51.873 "raid_level": "raid1", 00:19:51.873 "superblock": false, 00:19:51.873 "num_base_bdevs": 4, 00:19:51.873 "num_base_bdevs_discovered": 3, 00:19:51.873 "num_base_bdevs_operational": 4, 00:19:51.873 "base_bdevs_list": [ 00:19:51.873 { 00:19:51.873 "name": "BaseBdev1", 
00:19:51.873 "uuid": "4c53b37b-7702-403b-9c67-1d2e5dd7226a", 00:19:51.873 "is_configured": true, 00:19:51.873 "data_offset": 0, 00:19:51.873 "data_size": 65536 00:19:51.873 }, 00:19:51.873 { 00:19:51.873 "name": "BaseBdev2", 00:19:51.873 "uuid": "520ae1b8-0d39-4de4-8abb-8ab9698ff9c8", 00:19:51.873 "is_configured": true, 00:19:51.873 "data_offset": 0, 00:19:51.873 "data_size": 65536 00:19:51.873 }, 00:19:51.873 { 00:19:51.873 "name": "BaseBdev3", 00:19:51.873 "uuid": "db9d62b9-c457-42bd-8960-5dfdb5f3c29a", 00:19:51.873 "is_configured": true, 00:19:51.873 "data_offset": 0, 00:19:51.873 "data_size": 65536 00:19:51.873 }, 00:19:51.873 { 00:19:51.873 "name": "BaseBdev4", 00:19:51.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.873 "is_configured": false, 00:19:51.873 "data_offset": 0, 00:19:51.873 "data_size": 0 00:19:51.873 } 00:19:51.873 ] 00:19:51.873 }' 00:19:51.873 06:12:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:51.873 06:12:22 -- common/autotest_common.sh@10 -- # set +x 00:19:52.441 06:12:22 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:52.701 [2024-06-11 06:12:23.167536] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:52.701 [2024-06-11 06:12:23.167612] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:19:52.701 [2024-06-11 06:12:23.167621] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:52.701 [2024-06-11 06:12:23.167786] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:19:52.701 [2024-06-11 06:12:23.168474] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:19:52.701 [2024-06-11 06:12:23.168498] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:19:52.701 [2024-06-11 06:12:23.168913] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:52.701 BaseBdev4 00:19:52.701 06:12:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:52.701 06:12:23 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:19:52.701 06:12:23 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:52.701 06:12:23 -- common/autotest_common.sh@889 -- # local i 00:19:52.701 06:12:23 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:52.701 06:12:23 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:52.701 06:12:23 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:52.960 06:12:23 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:53.220 [ 00:19:53.220 { 00:19:53.220 "name": "BaseBdev4", 00:19:53.220 "aliases": [ 00:19:53.220 "114643cf-c329-48a5-a73c-cf6379c74046" 00:19:53.220 ], 00:19:53.220 "product_name": "Malloc disk", 00:19:53.220 "block_size": 512, 00:19:53.220 "num_blocks": 65536, 00:19:53.220 "uuid": "114643cf-c329-48a5-a73c-cf6379c74046", 00:19:53.220 "assigned_rate_limits": { 00:19:53.220 "rw_ios_per_sec": 0, 00:19:53.220 "rw_mbytes_per_sec": 0, 00:19:53.220 "r_mbytes_per_sec": 0, 00:19:53.220 "w_mbytes_per_sec": 0 00:19:53.220 }, 00:19:53.220 "claimed": true, 00:19:53.220 "claim_type": "exclusive_write", 00:19:53.220 "zoned": false, 00:19:53.220 "supported_io_types": { 
00:19:53.220 "read": true, 00:19:53.220 "write": true, 00:19:53.220 "unmap": true, 00:19:53.220 "write_zeroes": true, 00:19:53.220 "flush": true, 00:19:53.220 "reset": true, 00:19:53.220 "compare": false, 00:19:53.220 "compare_and_write": false, 00:19:53.220 "abort": true, 00:19:53.220 "nvme_admin": false, 00:19:53.220 "nvme_io": false 00:19:53.220 }, 00:19:53.220 "memory_domains": [ 00:19:53.220 { 00:19:53.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.220 "dma_device_type": 2 00:19:53.220 } 00:19:53.220 ], 00:19:53.220 "driver_specific": {} 00:19:53.220 } 00:19:53.220 ] 00:19:53.220 06:12:23 -- common/autotest_common.sh@895 -- # return 0 00:19:53.220 06:12:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:53.220 06:12:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:53.220 06:12:23 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:53.220 06:12:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:53.220 06:12:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:53.220 06:12:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:53.220 06:12:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:53.220 06:12:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:53.220 06:12:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:53.220 06:12:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:53.220 06:12:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:53.220 06:12:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:53.220 06:12:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:53.220 06:12:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.220 06:12:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:53.220 "name": "Existed_Raid", 00:19:53.220 "uuid": "7801be4e-a24f-4b63-9d79-36fa5011e348", 00:19:53.220 "strip_size_kb": 0, 00:19:53.220 "state": "online", 00:19:53.220 "raid_level": "raid1", 00:19:53.220 "superblock": false, 00:19:53.220 "num_base_bdevs": 4, 00:19:53.220 "num_base_bdevs_discovered": 4, 00:19:53.220 "num_base_bdevs_operational": 4, 00:19:53.220 "base_bdevs_list": [ 00:19:53.220 { 00:19:53.220 "name": "BaseBdev1", 00:19:53.220 "uuid": "4c53b37b-7702-403b-9c67-1d2e5dd7226a", 00:19:53.220 "is_configured": true, 00:19:53.220 "data_offset": 0, 00:19:53.220 "data_size": 65536 00:19:53.220 }, 00:19:53.220 { 00:19:53.220 "name": "BaseBdev2", 00:19:53.220 "uuid": "520ae1b8-0d39-4de4-8abb-8ab9698ff9c8", 00:19:53.220 "is_configured": true, 00:19:53.220 "data_offset": 0, 00:19:53.220 "data_size": 65536 00:19:53.220 }, 00:19:53.220 { 00:19:53.220 "name": "BaseBdev3", 00:19:53.220 "uuid": "db9d62b9-c457-42bd-8960-5dfdb5f3c29a", 00:19:53.220 "is_configured": true, 00:19:53.220 "data_offset": 0, 00:19:53.220 "data_size": 65536 00:19:53.220 }, 00:19:53.220 { 00:19:53.220 "name": "BaseBdev4", 00:19:53.220 "uuid": "114643cf-c329-48a5-a73c-cf6379c74046", 00:19:53.220 "is_configured": true, 00:19:53.220 "data_offset": 0, 00:19:53.220 "data_size": 65536 00:19:53.220 } 00:19:53.220 ] 00:19:53.220 }' 00:19:53.220 06:12:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:53.220 06:12:23 -- common/autotest_common.sh@10 -- # set +x 00:19:54.157 06:12:24 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:54.157 [2024-06-11 06:12:24.688043] 
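
The trace above has just finished assembling Existed_Raid: each bdev_malloc_create produces a base bdev, the raid module claims it ("bdev BaseBdevN is claimed"), and when the fourth claim lands the array flips from "configuring" to "online" with num_base_bdevs_discovered at 4 — after which the @262 delete just traced kicks off the degradation test. A minimal out-of-harness sketch of the same assembly, using only RPCs that appear in this log and assuming a target already listening on /var/tmp/spdk-raid.sock (the $rpc shorthand is a local convenience, not part of the harness):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Declare the array first; with no base bdevs present it sits in "configuring",
  # as the "doesn't exist now" debug lines above show.
  $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

  # 32 MiB malloc bdevs with 512-byte blocks (the dumps show num_blocks 65536).
  # bdev_get_bdevs -b NAME -t 2000 is how waitforbdev blocks until each appears.
  for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
      $rpc bdev_malloc_create 32 512 -b "$b"
      $rpc bdev_wait_for_examine
      $rpc bdev_get_bdevs -b "$b" -t 2000 > /dev/null
  done

  # The fourth base bdev should flip the state to "online".
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
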
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:54.416 06:12:24 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:54.416 06:12:24 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:19:54.416 06:12:24 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:54.416 06:12:24 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:54.416 06:12:24 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:19:54.416 06:12:24 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:54.416 06:12:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:54.416 06:12:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:54.416 06:12:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:54.416 06:12:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:54.416 06:12:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:54.416 06:12:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:54.416 06:12:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:54.416 06:12:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:54.416 06:12:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:54.416 06:12:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:54.416 06:12:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.416 06:12:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:54.416 "name": "Existed_Raid", 00:19:54.416 "uuid": "7801be4e-a24f-4b63-9d79-36fa5011e348", 00:19:54.416 "strip_size_kb": 0, 00:19:54.416 "state": "online", 00:19:54.416 "raid_level": "raid1", 00:19:54.416 "superblock": false, 00:19:54.416 "num_base_bdevs": 4, 00:19:54.416 "num_base_bdevs_discovered": 3, 00:19:54.416 "num_base_bdevs_operational": 3, 00:19:54.416 "base_bdevs_list": [ 00:19:54.416 { 00:19:54.416 "name": null, 00:19:54.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.416 "is_configured": false, 00:19:54.416 "data_offset": 0, 00:19:54.416 "data_size": 65536 00:19:54.416 }, 00:19:54.416 { 00:19:54.416 "name": "BaseBdev2", 00:19:54.416 "uuid": "520ae1b8-0d39-4de4-8abb-8ab9698ff9c8", 00:19:54.416 "is_configured": true, 00:19:54.416 "data_offset": 0, 00:19:54.416 "data_size": 65536 00:19:54.416 }, 00:19:54.416 { 00:19:54.416 "name": "BaseBdev3", 00:19:54.416 "uuid": "db9d62b9-c457-42bd-8960-5dfdb5f3c29a", 00:19:54.416 "is_configured": true, 00:19:54.416 "data_offset": 0, 00:19:54.416 "data_size": 65536 00:19:54.416 }, 00:19:54.417 { 00:19:54.417 "name": "BaseBdev4", 00:19:54.417 "uuid": "114643cf-c329-48a5-a73c-cf6379c74046", 00:19:54.417 "is_configured": true, 00:19:54.417 "data_offset": 0, 00:19:54.417 "data_size": 65536 00:19:54.417 } 00:19:54.417 ] 00:19:54.417 }' 00:19:54.675 06:12:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:54.675 06:12:25 -- common/autotest_common.sh@10 -- # set +x 00:19:55.243 06:12:25 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:55.243 06:12:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:55.243 06:12:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.243 06:12:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:55.243 06:12:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:55.243 06:12:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:55.243 06:12:25 -- bdev/bdev_raid.sh@279 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:55.503 [2024-06-11 06:12:26.067414] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:55.761 06:12:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:55.761 06:12:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:55.761 06:12:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.761 06:12:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:56.020 06:12:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:56.020 06:12:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:56.020 06:12:26 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:56.020 [2024-06-11 06:12:26.604786] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:56.278 06:12:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:56.278 06:12:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:56.278 06:12:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.278 06:12:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:56.537 06:12:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:56.537 06:12:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:56.537 06:12:26 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:56.537 [2024-06-11 06:12:27.135836] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:56.537 [2024-06-11 06:12:27.135877] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:56.537 [2024-06-11 06:12:27.135947] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:56.796 [2024-06-11 06:12:27.239888] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:56.796 [2024-06-11 06:12:27.239938] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:19:56.796 06:12:27 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:56.796 06:12:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:56.796 06:12:27 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.796 06:12:27 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:56.796 06:12:27 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:57.054 06:12:27 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:57.054 06:12:27 -- bdev/bdev_raid.sh@287 -- # killprocess 121499 00:19:57.054 06:12:27 -- common/autotest_common.sh@926 -- # '[' -z 121499 ']' 00:19:57.054 06:12:27 -- common/autotest_common.sh@930 -- # kill -0 121499 00:19:57.054 06:12:27 -- common/autotest_common.sh@931 -- # uname 00:19:57.055 06:12:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:57.055 06:12:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121499 00:19:57.055 06:12:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:57.055 06:12:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:57.055 06:12:27 -- common/autotest_common.sh@944 -- # echo 'killing process with 
pid 121499' 00:19:57.055 killing process with pid 121499 00:19:57.055 06:12:27 -- common/autotest_common.sh@945 -- # kill 121499 00:19:57.055 06:12:27 -- common/autotest_common.sh@950 -- # wait 121499 00:19:57.055 [2024-06-11 06:12:27.474449] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:57.055 [2024-06-11 06:12:27.474758] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:58.433 00:19:58.433 real 0m13.659s 00:19:58.433 user 0m22.887s 00:19:58.433 sys 0m2.411s 00:19:58.433 06:12:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:58.433 ************************************ 00:19:58.433 END TEST raid_state_function_test 00:19:58.433 ************************************ 00:19:58.433 06:12:28 -- common/autotest_common.sh@10 -- # set +x 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:19:58.433 06:12:28 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:58.433 06:12:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:58.433 06:12:28 -- common/autotest_common.sh@10 -- # set +x 00:19:58.433 ************************************ 00:19:58.433 START TEST raid_state_function_test_sb 00:19:58.433 ************************************ 00:19:58.433 06:12:28 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 true 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@226 -- # 
raid_pid=121932 00:19:58.433 Process raid pid: 121932 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 121932' 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:58.433 06:12:28 -- bdev/bdev_raid.sh@228 -- # waitforlisten 121932 /var/tmp/spdk-raid.sock 00:19:58.433 06:12:28 -- common/autotest_common.sh@819 -- # '[' -z 121932 ']' 00:19:58.433 06:12:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:58.433 06:12:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:58.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:58.433 06:12:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:58.433 06:12:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:58.433 06:12:28 -- common/autotest_common.sh@10 -- # set +x 00:19:58.433 [2024-06-11 06:12:29.017270] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:19:58.433 [2024-06-11 06:12:29.017485] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.692 [2024-06-11 06:12:29.194492] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.951 [2024-06-11 06:12:29.424248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.210 [2024-06-11 06:12:29.671121] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:59.469 06:12:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:59.469 06:12:29 -- common/autotest_common.sh@852 -- # return 0 00:19:59.469 06:12:29 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:59.728 [2024-06-11 06:12:30.148864] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:59.728 [2024-06-11 06:12:30.148970] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:59.728 [2024-06-11 06:12:30.148983] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:59.728 [2024-06-11 06:12:30.149008] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:59.728 [2024-06-11 06:12:30.149015] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:59.728 [2024-06-11 06:12:30.149055] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:59.728 [2024-06-11 06:12:30.149063] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:59.728 [2024-06-11 06:12:30.149087] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:59.728 06:12:30 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:59.728 06:12:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:59.728 06:12:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:59.728 06:12:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:59.728 06:12:30 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:59.728 06:12:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:59.728 06:12:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:59.728 06:12:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:59.728 06:12:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:59.728 06:12:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:59.728 06:12:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.728 06:12:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:59.987 06:12:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:59.987 "name": "Existed_Raid", 00:19:59.987 "uuid": "d42103c7-0092-4dc1-b6e1-90334e6627a1", 00:19:59.987 "strip_size_kb": 0, 00:19:59.987 "state": "configuring", 00:19:59.987 "raid_level": "raid1", 00:19:59.987 "superblock": true, 00:19:59.987 "num_base_bdevs": 4, 00:19:59.987 "num_base_bdevs_discovered": 0, 00:19:59.987 "num_base_bdevs_operational": 4, 00:19:59.987 "base_bdevs_list": [ 00:19:59.987 { 00:19:59.987 "name": "BaseBdev1", 00:19:59.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.987 "is_configured": false, 00:19:59.987 "data_offset": 0, 00:19:59.987 "data_size": 0 00:19:59.987 }, 00:19:59.987 { 00:19:59.987 "name": "BaseBdev2", 00:19:59.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.987 "is_configured": false, 00:19:59.987 "data_offset": 0, 00:19:59.987 "data_size": 0 00:19:59.987 }, 00:19:59.987 { 00:19:59.987 "name": "BaseBdev3", 00:19:59.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.987 "is_configured": false, 00:19:59.987 "data_offset": 0, 00:19:59.987 "data_size": 0 00:19:59.987 }, 00:19:59.987 { 00:19:59.987 "name": "BaseBdev4", 00:19:59.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.987 "is_configured": false, 00:19:59.987 "data_offset": 0, 00:19:59.987 "data_size": 0 00:19:59.987 } 00:19:59.987 ] 00:19:59.987 }' 00:19:59.987 06:12:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:59.987 06:12:30 -- common/autotest_common.sh@10 -- # set +x 00:20:00.556 06:12:30 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:00.556 [2024-06-11 06:12:31.172881] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:00.556 [2024-06-11 06:12:31.172931] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:20:00.556 06:12:31 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:00.815 [2024-06-11 06:12:31.449009] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:00.815 [2024-06-11 06:12:31.449102] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:00.815 [2024-06-11 06:12:31.449111] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:00.815 [2024-06-11 06:12:31.449137] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:00.815 [2024-06-11 06:12:31.449144] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:00.815 [2024-06-11 06:12:31.449182] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:00.815 [2024-06-11 06:12:31.449189] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:00.815 [2024-06-11 06:12:31.449212] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:01.074 06:12:31 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:01.334 [2024-06-11 06:12:31.753035] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:01.334 BaseBdev1 00:20:01.334 06:12:31 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:20:01.334 06:12:31 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:20:01.334 06:12:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:01.334 06:12:31 -- common/autotest_common.sh@889 -- # local i 00:20:01.334 06:12:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:01.334 06:12:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:01.334 06:12:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:01.593 06:12:31 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:01.593 [ 00:20:01.593 { 00:20:01.593 "name": "BaseBdev1", 00:20:01.593 "aliases": [ 00:20:01.593 "9c04b865-b020-4715-ae2b-c74bb6170127" 00:20:01.593 ], 00:20:01.593 "product_name": "Malloc disk", 00:20:01.593 "block_size": 512, 00:20:01.593 "num_blocks": 65536, 00:20:01.593 "uuid": "9c04b865-b020-4715-ae2b-c74bb6170127", 00:20:01.593 "assigned_rate_limits": { 00:20:01.593 "rw_ios_per_sec": 0, 00:20:01.593 "rw_mbytes_per_sec": 0, 00:20:01.593 "r_mbytes_per_sec": 0, 00:20:01.593 "w_mbytes_per_sec": 0 00:20:01.593 }, 00:20:01.593 "claimed": true, 00:20:01.593 "claim_type": "exclusive_write", 00:20:01.593 "zoned": false, 00:20:01.593 "supported_io_types": { 00:20:01.593 "read": true, 00:20:01.593 "write": true, 00:20:01.593 "unmap": true, 00:20:01.593 "write_zeroes": true, 00:20:01.593 "flush": true, 00:20:01.593 "reset": true, 00:20:01.593 "compare": false, 00:20:01.593 "compare_and_write": false, 00:20:01.593 "abort": true, 00:20:01.593 "nvme_admin": false, 00:20:01.593 "nvme_io": false 00:20:01.593 }, 00:20:01.593 "memory_domains": [ 00:20:01.593 { 00:20:01.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.593 "dma_device_type": 2 00:20:01.593 } 00:20:01.593 ], 00:20:01.593 "driver_specific": {} 00:20:01.593 } 00:20:01.593 ] 00:20:01.593 06:12:32 -- common/autotest_common.sh@895 -- # return 0 00:20:01.593 06:12:32 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:01.593 06:12:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:01.593 06:12:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:01.593 06:12:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:01.593 06:12:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:01.593 06:12:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:01.593 06:12:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:01.593 06:12:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:01.593 06:12:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:01.593 06:12:32 -- bdev/bdev_raid.sh@125 -- # local tmp 
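
At this point the superblock variant is re-running the same state machine: bdev_raid_create now gets -s, so the raid reserves room at the head of each base bdev for an on-disk superblock — which is why the dumps that follow show data_offset 2048 and data_size 63488 instead of the 0 and 65536 of the non-superblock run, consistent with 2048 blocks (1 MiB at 512-byte blocks) of metadata per base bdev. The verify_raid_bdev_state helper reads everything back in a single RPC and lets jq pick out the array, exactly as the @127 trace lines show; a sketch of that read path, under the same assumptions as the block above:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Same create call as before, plus -s for the on-disk superblock.
  $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

  # One query, then jq does the selection (mirroring the @127 lines).
  info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  echo "$info" | jq -r '.state'                      # "configuring" until all bases exist
  echo "$info" | jq -r '.num_base_bdevs_discovered'  # climbs 1 -> 4 as malloc bdevs land
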
00:20:01.593 06:12:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.593 06:12:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:01.851 06:12:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:01.851 "name": "Existed_Raid", 00:20:01.851 "uuid": "f4d0d703-ad80-4d24-b2bb-a25c46b8b875", 00:20:01.851 "strip_size_kb": 0, 00:20:01.851 "state": "configuring", 00:20:01.851 "raid_level": "raid1", 00:20:01.851 "superblock": true, 00:20:01.851 "num_base_bdevs": 4, 00:20:01.851 "num_base_bdevs_discovered": 1, 00:20:01.851 "num_base_bdevs_operational": 4, 00:20:01.852 "base_bdevs_list": [ 00:20:01.852 { 00:20:01.852 "name": "BaseBdev1", 00:20:01.852 "uuid": "9c04b865-b020-4715-ae2b-c74bb6170127", 00:20:01.852 "is_configured": true, 00:20:01.852 "data_offset": 2048, 00:20:01.852 "data_size": 63488 00:20:01.852 }, 00:20:01.852 { 00:20:01.852 "name": "BaseBdev2", 00:20:01.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.852 "is_configured": false, 00:20:01.852 "data_offset": 0, 00:20:01.852 "data_size": 0 00:20:01.852 }, 00:20:01.852 { 00:20:01.852 "name": "BaseBdev3", 00:20:01.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.852 "is_configured": false, 00:20:01.852 "data_offset": 0, 00:20:01.852 "data_size": 0 00:20:01.852 }, 00:20:01.852 { 00:20:01.852 "name": "BaseBdev4", 00:20:01.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.852 "is_configured": false, 00:20:01.852 "data_offset": 0, 00:20:01.852 "data_size": 0 00:20:01.852 } 00:20:01.852 ] 00:20:01.852 }' 00:20:01.852 06:12:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:01.852 06:12:32 -- common/autotest_common.sh@10 -- # set +x 00:20:02.419 06:12:32 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:02.679 [2024-06-11 06:12:33.085345] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:02.679 [2024-06-11 06:12:33.085427] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:20:02.679 06:12:33 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:20:02.679 06:12:33 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:02.937 06:12:33 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:03.197 BaseBdev1 00:20:03.197 06:12:33 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:20:03.197 06:12:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:20:03.197 06:12:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:03.197 06:12:33 -- common/autotest_common.sh@889 -- # local i 00:20:03.197 06:12:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:03.197 06:12:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:03.197 06:12:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:03.456 06:12:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:03.456 [ 00:20:03.456 { 00:20:03.456 "name": "BaseBdev1", 00:20:03.456 "aliases": [ 00:20:03.456 "095a7698-9c82-4007-9134-4a6e826c9676" 00:20:03.456 ], 00:20:03.456 
"product_name": "Malloc disk", 00:20:03.456 "block_size": 512, 00:20:03.456 "num_blocks": 65536, 00:20:03.456 "uuid": "095a7698-9c82-4007-9134-4a6e826c9676", 00:20:03.457 "assigned_rate_limits": { 00:20:03.457 "rw_ios_per_sec": 0, 00:20:03.457 "rw_mbytes_per_sec": 0, 00:20:03.457 "r_mbytes_per_sec": 0, 00:20:03.457 "w_mbytes_per_sec": 0 00:20:03.457 }, 00:20:03.457 "claimed": false, 00:20:03.457 "zoned": false, 00:20:03.457 "supported_io_types": { 00:20:03.457 "read": true, 00:20:03.457 "write": true, 00:20:03.457 "unmap": true, 00:20:03.457 "write_zeroes": true, 00:20:03.457 "flush": true, 00:20:03.457 "reset": true, 00:20:03.457 "compare": false, 00:20:03.457 "compare_and_write": false, 00:20:03.457 "abort": true, 00:20:03.457 "nvme_admin": false, 00:20:03.457 "nvme_io": false 00:20:03.457 }, 00:20:03.457 "memory_domains": [ 00:20:03.457 { 00:20:03.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.457 "dma_device_type": 2 00:20:03.457 } 00:20:03.457 ], 00:20:03.457 "driver_specific": {} 00:20:03.457 } 00:20:03.457 ] 00:20:03.719 06:12:34 -- common/autotest_common.sh@895 -- # return 0 00:20:03.719 06:12:34 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:20:03.719 [2024-06-11 06:12:34.257908] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:03.719 [2024-06-11 06:12:34.260205] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:03.719 [2024-06-11 06:12:34.260304] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:03.719 [2024-06-11 06:12:34.260315] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:03.719 [2024-06-11 06:12:34.260341] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:03.719 [2024-06-11 06:12:34.260348] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:03.719 [2024-06-11 06:12:34.260366] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:03.719 06:12:34 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:20:03.719 06:12:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:03.719 06:12:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:03.719 06:12:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:03.719 06:12:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:03.719 06:12:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:03.719 06:12:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:03.719 06:12:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:03.719 06:12:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:03.719 06:12:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:03.719 06:12:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:03.719 06:12:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:03.719 06:12:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.719 06:12:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:03.980 06:12:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:03.980 "name": "Existed_Raid", 00:20:03.980 "uuid": 
"97e57db5-5b86-41ec-affe-c0ddd11a36d4", 00:20:03.980 "strip_size_kb": 0, 00:20:03.980 "state": "configuring", 00:20:03.980 "raid_level": "raid1", 00:20:03.980 "superblock": true, 00:20:03.980 "num_base_bdevs": 4, 00:20:03.980 "num_base_bdevs_discovered": 1, 00:20:03.980 "num_base_bdevs_operational": 4, 00:20:03.980 "base_bdevs_list": [ 00:20:03.980 { 00:20:03.980 "name": "BaseBdev1", 00:20:03.980 "uuid": "095a7698-9c82-4007-9134-4a6e826c9676", 00:20:03.980 "is_configured": true, 00:20:03.980 "data_offset": 2048, 00:20:03.980 "data_size": 63488 00:20:03.980 }, 00:20:03.980 { 00:20:03.980 "name": "BaseBdev2", 00:20:03.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.980 "is_configured": false, 00:20:03.980 "data_offset": 0, 00:20:03.980 "data_size": 0 00:20:03.980 }, 00:20:03.980 { 00:20:03.980 "name": "BaseBdev3", 00:20:03.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.980 "is_configured": false, 00:20:03.980 "data_offset": 0, 00:20:03.980 "data_size": 0 00:20:03.980 }, 00:20:03.980 { 00:20:03.980 "name": "BaseBdev4", 00:20:03.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:03.980 "is_configured": false, 00:20:03.980 "data_offset": 0, 00:20:03.980 "data_size": 0 00:20:03.980 } 00:20:03.980 ] 00:20:03.980 }' 00:20:03.980 06:12:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:03.980 06:12:34 -- common/autotest_common.sh@10 -- # set +x 00:20:04.548 06:12:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:04.549 [2024-06-11 06:12:35.181005] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:04.549 BaseBdev2 00:20:04.808 06:12:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:20:04.808 06:12:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:20:04.808 06:12:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:04.808 06:12:35 -- common/autotest_common.sh@889 -- # local i 00:20:04.808 06:12:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:04.808 06:12:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:04.808 06:12:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:04.808 06:12:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:05.067 [ 00:20:05.067 { 00:20:05.067 "name": "BaseBdev2", 00:20:05.067 "aliases": [ 00:20:05.067 "3f5bb4cf-4800-482d-8f33-abfa335f1d74" 00:20:05.067 ], 00:20:05.067 "product_name": "Malloc disk", 00:20:05.067 "block_size": 512, 00:20:05.067 "num_blocks": 65536, 00:20:05.067 "uuid": "3f5bb4cf-4800-482d-8f33-abfa335f1d74", 00:20:05.067 "assigned_rate_limits": { 00:20:05.067 "rw_ios_per_sec": 0, 00:20:05.067 "rw_mbytes_per_sec": 0, 00:20:05.067 "r_mbytes_per_sec": 0, 00:20:05.067 "w_mbytes_per_sec": 0 00:20:05.067 }, 00:20:05.067 "claimed": true, 00:20:05.067 "claim_type": "exclusive_write", 00:20:05.067 "zoned": false, 00:20:05.067 "supported_io_types": { 00:20:05.067 "read": true, 00:20:05.067 "write": true, 00:20:05.067 "unmap": true, 00:20:05.067 "write_zeroes": true, 00:20:05.067 "flush": true, 00:20:05.067 "reset": true, 00:20:05.067 "compare": false, 00:20:05.067 "compare_and_write": false, 00:20:05.067 "abort": true, 00:20:05.067 "nvme_admin": false, 00:20:05.067 "nvme_io": false 00:20:05.067 }, 00:20:05.067 "memory_domains": [ 00:20:05.067 { 
00:20:05.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:05.067 "dma_device_type": 2 00:20:05.067 } 00:20:05.067 ], 00:20:05.067 "driver_specific": {} 00:20:05.067 } 00:20:05.067 ] 00:20:05.067 06:12:35 -- common/autotest_common.sh@895 -- # return 0 00:20:05.067 06:12:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:05.067 06:12:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:05.067 06:12:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:05.067 06:12:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:05.067 06:12:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:05.067 06:12:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:05.067 06:12:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:05.067 06:12:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:05.067 06:12:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:05.067 06:12:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:05.067 06:12:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:05.067 06:12:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:05.067 06:12:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:05.067 06:12:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.326 06:12:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:05.326 "name": "Existed_Raid", 00:20:05.326 "uuid": "97e57db5-5b86-41ec-affe-c0ddd11a36d4", 00:20:05.326 "strip_size_kb": 0, 00:20:05.326 "state": "configuring", 00:20:05.326 "raid_level": "raid1", 00:20:05.326 "superblock": true, 00:20:05.326 "num_base_bdevs": 4, 00:20:05.326 "num_base_bdevs_discovered": 2, 00:20:05.326 "num_base_bdevs_operational": 4, 00:20:05.326 "base_bdevs_list": [ 00:20:05.326 { 00:20:05.326 "name": "BaseBdev1", 00:20:05.326 "uuid": "095a7698-9c82-4007-9134-4a6e826c9676", 00:20:05.326 "is_configured": true, 00:20:05.326 "data_offset": 2048, 00:20:05.326 "data_size": 63488 00:20:05.326 }, 00:20:05.326 { 00:20:05.326 "name": "BaseBdev2", 00:20:05.326 "uuid": "3f5bb4cf-4800-482d-8f33-abfa335f1d74", 00:20:05.326 "is_configured": true, 00:20:05.326 "data_offset": 2048, 00:20:05.326 "data_size": 63488 00:20:05.326 }, 00:20:05.326 { 00:20:05.326 "name": "BaseBdev3", 00:20:05.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.326 "is_configured": false, 00:20:05.326 "data_offset": 0, 00:20:05.326 "data_size": 0 00:20:05.326 }, 00:20:05.326 { 00:20:05.326 "name": "BaseBdev4", 00:20:05.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.326 "is_configured": false, 00:20:05.326 "data_offset": 0, 00:20:05.326 "data_size": 0 00:20:05.326 } 00:20:05.326 ] 00:20:05.326 }' 00:20:05.326 06:12:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:05.326 06:12:35 -- common/autotest_common.sh@10 -- # set +x 00:20:05.895 06:12:36 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:06.154 [2024-06-11 06:12:36.638816] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:06.154 BaseBdev3 00:20:06.154 06:12:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:20:06.154 06:12:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:20:06.154 06:12:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:06.154 06:12:36 -- 
common/autotest_common.sh@889 -- # local i 00:20:06.154 06:12:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:06.154 06:12:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:06.154 06:12:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:06.413 06:12:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:06.673 [ 00:20:06.673 { 00:20:06.673 "name": "BaseBdev3", 00:20:06.673 "aliases": [ 00:20:06.673 "1d392265-31b8-4e3d-971e-e3da5b897209" 00:20:06.673 ], 00:20:06.673 "product_name": "Malloc disk", 00:20:06.673 "block_size": 512, 00:20:06.673 "num_blocks": 65536, 00:20:06.673 "uuid": "1d392265-31b8-4e3d-971e-e3da5b897209", 00:20:06.673 "assigned_rate_limits": { 00:20:06.673 "rw_ios_per_sec": 0, 00:20:06.673 "rw_mbytes_per_sec": 0, 00:20:06.673 "r_mbytes_per_sec": 0, 00:20:06.673 "w_mbytes_per_sec": 0 00:20:06.673 }, 00:20:06.673 "claimed": true, 00:20:06.673 "claim_type": "exclusive_write", 00:20:06.673 "zoned": false, 00:20:06.673 "supported_io_types": { 00:20:06.673 "read": true, 00:20:06.673 "write": true, 00:20:06.673 "unmap": true, 00:20:06.673 "write_zeroes": true, 00:20:06.673 "flush": true, 00:20:06.673 "reset": true, 00:20:06.673 "compare": false, 00:20:06.673 "compare_and_write": false, 00:20:06.673 "abort": true, 00:20:06.673 "nvme_admin": false, 00:20:06.673 "nvme_io": false 00:20:06.673 }, 00:20:06.673 "memory_domains": [ 00:20:06.673 { 00:20:06.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.673 "dma_device_type": 2 00:20:06.673 } 00:20:06.673 ], 00:20:06.673 "driver_specific": {} 00:20:06.673 } 00:20:06.673 ] 00:20:06.673 06:12:37 -- common/autotest_common.sh@895 -- # return 0 00:20:06.673 06:12:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:06.673 06:12:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:06.673 06:12:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:20:06.673 06:12:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:06.673 06:12:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:06.673 06:12:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:06.673 06:12:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:06.673 06:12:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:06.673 06:12:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:06.673 06:12:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:06.673 06:12:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:06.673 06:12:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:06.673 06:12:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.673 06:12:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:06.933 06:12:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:06.933 "name": "Existed_Raid", 00:20:06.933 "uuid": "97e57db5-5b86-41ec-affe-c0ddd11a36d4", 00:20:06.933 "strip_size_kb": 0, 00:20:06.933 "state": "configuring", 00:20:06.933 "raid_level": "raid1", 00:20:06.933 "superblock": true, 00:20:06.933 "num_base_bdevs": 4, 00:20:06.933 "num_base_bdevs_discovered": 3, 00:20:06.933 "num_base_bdevs_operational": 4, 00:20:06.933 "base_bdevs_list": [ 00:20:06.933 { 00:20:06.933 "name": "BaseBdev1", 00:20:06.933 
"uuid": "095a7698-9c82-4007-9134-4a6e826c9676", 00:20:06.933 "is_configured": true, 00:20:06.933 "data_offset": 2048, 00:20:06.933 "data_size": 63488 00:20:06.933 }, 00:20:06.933 { 00:20:06.933 "name": "BaseBdev2", 00:20:06.933 "uuid": "3f5bb4cf-4800-482d-8f33-abfa335f1d74", 00:20:06.933 "is_configured": true, 00:20:06.933 "data_offset": 2048, 00:20:06.933 "data_size": 63488 00:20:06.933 }, 00:20:06.933 { 00:20:06.933 "name": "BaseBdev3", 00:20:06.933 "uuid": "1d392265-31b8-4e3d-971e-e3da5b897209", 00:20:06.933 "is_configured": true, 00:20:06.933 "data_offset": 2048, 00:20:06.933 "data_size": 63488 00:20:06.933 }, 00:20:06.933 { 00:20:06.933 "name": "BaseBdev4", 00:20:06.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.933 "is_configured": false, 00:20:06.933 "data_offset": 0, 00:20:06.933 "data_size": 0 00:20:06.933 } 00:20:06.933 ] 00:20:06.933 }' 00:20:06.933 06:12:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:06.933 06:12:37 -- common/autotest_common.sh@10 -- # set +x 00:20:07.502 06:12:37 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:07.761 [2024-06-11 06:12:38.280558] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:07.761 [2024-06-11 06:12:38.280840] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:20:07.761 [2024-06-11 06:12:38.280853] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:07.761 [2024-06-11 06:12:38.281011] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:20:07.761 [2024-06-11 06:12:38.281357] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:20:07.761 [2024-06-11 06:12:38.281376] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:20:07.761 [2024-06-11 06:12:38.281541] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:07.761 BaseBdev4 00:20:07.761 06:12:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:20:07.761 06:12:38 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:20:07.761 06:12:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:20:07.761 06:12:38 -- common/autotest_common.sh@889 -- # local i 00:20:07.761 06:12:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:20:07.761 06:12:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:20:07.761 06:12:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:08.021 06:12:38 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:08.280 [ 00:20:08.280 { 00:20:08.280 "name": "BaseBdev4", 00:20:08.280 "aliases": [ 00:20:08.280 "ee82e215-2b88-49af-8c6f-d050387d63fa" 00:20:08.280 ], 00:20:08.280 "product_name": "Malloc disk", 00:20:08.280 "block_size": 512, 00:20:08.280 "num_blocks": 65536, 00:20:08.280 "uuid": "ee82e215-2b88-49af-8c6f-d050387d63fa", 00:20:08.280 "assigned_rate_limits": { 00:20:08.280 "rw_ios_per_sec": 0, 00:20:08.280 "rw_mbytes_per_sec": 0, 00:20:08.280 "r_mbytes_per_sec": 0, 00:20:08.280 "w_mbytes_per_sec": 0 00:20:08.280 }, 00:20:08.280 "claimed": true, 00:20:08.280 "claim_type": "exclusive_write", 00:20:08.280 "zoned": false, 00:20:08.280 "supported_io_types": { 00:20:08.280 
"read": true, 00:20:08.280 "write": true, 00:20:08.280 "unmap": true, 00:20:08.280 "write_zeroes": true, 00:20:08.280 "flush": true, 00:20:08.280 "reset": true, 00:20:08.280 "compare": false, 00:20:08.280 "compare_and_write": false, 00:20:08.280 "abort": true, 00:20:08.280 "nvme_admin": false, 00:20:08.280 "nvme_io": false 00:20:08.280 }, 00:20:08.280 "memory_domains": [ 00:20:08.280 { 00:20:08.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.280 "dma_device_type": 2 00:20:08.280 } 00:20:08.280 ], 00:20:08.280 "driver_specific": {} 00:20:08.280 } 00:20:08.280 ] 00:20:08.280 06:12:38 -- common/autotest_common.sh@895 -- # return 0 00:20:08.280 06:12:38 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:20:08.280 06:12:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:20:08.280 06:12:38 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:20:08.280 06:12:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:08.280 06:12:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:08.280 06:12:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:08.280 06:12:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:08.280 06:12:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:08.280 06:12:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:08.280 06:12:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:08.280 06:12:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:08.280 06:12:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:08.280 06:12:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:08.280 06:12:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.540 06:12:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:08.540 "name": "Existed_Raid", 00:20:08.540 "uuid": "97e57db5-5b86-41ec-affe-c0ddd11a36d4", 00:20:08.540 "strip_size_kb": 0, 00:20:08.540 "state": "online", 00:20:08.540 "raid_level": "raid1", 00:20:08.540 "superblock": true, 00:20:08.540 "num_base_bdevs": 4, 00:20:08.540 "num_base_bdevs_discovered": 4, 00:20:08.540 "num_base_bdevs_operational": 4, 00:20:08.540 "base_bdevs_list": [ 00:20:08.540 { 00:20:08.540 "name": "BaseBdev1", 00:20:08.540 "uuid": "095a7698-9c82-4007-9134-4a6e826c9676", 00:20:08.540 "is_configured": true, 00:20:08.540 "data_offset": 2048, 00:20:08.540 "data_size": 63488 00:20:08.540 }, 00:20:08.540 { 00:20:08.540 "name": "BaseBdev2", 00:20:08.540 "uuid": "3f5bb4cf-4800-482d-8f33-abfa335f1d74", 00:20:08.540 "is_configured": true, 00:20:08.540 "data_offset": 2048, 00:20:08.540 "data_size": 63488 00:20:08.540 }, 00:20:08.540 { 00:20:08.540 "name": "BaseBdev3", 00:20:08.540 "uuid": "1d392265-31b8-4e3d-971e-e3da5b897209", 00:20:08.540 "is_configured": true, 00:20:08.540 "data_offset": 2048, 00:20:08.540 "data_size": 63488 00:20:08.540 }, 00:20:08.540 { 00:20:08.540 "name": "BaseBdev4", 00:20:08.540 "uuid": "ee82e215-2b88-49af-8c6f-d050387d63fa", 00:20:08.540 "is_configured": true, 00:20:08.540 "data_offset": 2048, 00:20:08.540 "data_size": 63488 00:20:08.540 } 00:20:08.540 ] 00:20:08.540 }' 00:20:08.540 06:12:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:08.540 06:12:38 -- common/autotest_common.sh@10 -- # set +x 00:20:09.108 06:12:39 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:09.368 [2024-06-11 06:12:39.764918] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:09.368 06:12:39 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:20:09.368 06:12:39 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:20:09.368 06:12:39 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:09.368 06:12:39 -- bdev/bdev_raid.sh@196 -- # return 0 00:20:09.368 06:12:39 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:20:09.368 06:12:39 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:09.368 06:12:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:20:09.368 06:12:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:09.368 06:12:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:09.368 06:12:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:09.368 06:12:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:09.368 06:12:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:09.368 06:12:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:09.368 06:12:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:09.368 06:12:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:09.368 06:12:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.368 06:12:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:09.632 06:12:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:09.632 "name": "Existed_Raid", 00:20:09.632 "uuid": "97e57db5-5b86-41ec-affe-c0ddd11a36d4", 00:20:09.632 "strip_size_kb": 0, 00:20:09.632 "state": "online", 00:20:09.632 "raid_level": "raid1", 00:20:09.632 "superblock": true, 00:20:09.632 "num_base_bdevs": 4, 00:20:09.632 "num_base_bdevs_discovered": 3, 00:20:09.632 "num_base_bdevs_operational": 3, 00:20:09.632 "base_bdevs_list": [ 00:20:09.632 { 00:20:09.632 "name": null, 00:20:09.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.632 "is_configured": false, 00:20:09.632 "data_offset": 2048, 00:20:09.632 "data_size": 63488 00:20:09.632 }, 00:20:09.632 { 00:20:09.632 "name": "BaseBdev2", 00:20:09.633 "uuid": "3f5bb4cf-4800-482d-8f33-abfa335f1d74", 00:20:09.633 "is_configured": true, 00:20:09.633 "data_offset": 2048, 00:20:09.633 "data_size": 63488 00:20:09.633 }, 00:20:09.633 { 00:20:09.633 "name": "BaseBdev3", 00:20:09.633 "uuid": "1d392265-31b8-4e3d-971e-e3da5b897209", 00:20:09.633 "is_configured": true, 00:20:09.633 "data_offset": 2048, 00:20:09.633 "data_size": 63488 00:20:09.633 }, 00:20:09.633 { 00:20:09.633 "name": "BaseBdev4", 00:20:09.633 "uuid": "ee82e215-2b88-49af-8c6f-d050387d63fa", 00:20:09.633 "is_configured": true, 00:20:09.633 "data_offset": 2048, 00:20:09.633 "data_size": 63488 00:20:09.633 } 00:20:09.633 ] 00:20:09.633 }' 00:20:09.633 06:12:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:09.633 06:12:40 -- common/autotest_common.sh@10 -- # set +x 00:20:10.203 06:12:40 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:20:10.203 06:12:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:10.203 06:12:40 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.203 06:12:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:10.462 06:12:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:10.462 06:12:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:10.462 06:12:40 -- bdev/bdev_raid.sh@279 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:10.721 [2024-06-11 06:12:41.201710] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:10.721 06:12:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:10.721 06:12:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:10.721 06:12:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.721 06:12:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:10.980 06:12:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:10.980 06:12:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:10.980 06:12:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:11.239 [2024-06-11 06:12:41.797188] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:11.497 06:12:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:11.497 06:12:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:11.497 06:12:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.497 06:12:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:20:11.756 06:12:42 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:20:11.756 06:12:42 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:11.756 06:12:42 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:20:11.756 [2024-06-11 06:12:42.372737] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:11.756 [2024-06-11 06:12:42.372777] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:11.756 [2024-06-11 06:12:42.372879] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:12.016 [2024-06-11 06:12:42.473576] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:12.016 [2024-06-11 06:12:42.473627] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:20:12.016 06:12:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:20:12.016 06:12:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:20:12.016 06:12:42 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.016 06:12:42 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:20:12.275 06:12:42 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:20:12.275 06:12:42 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:20:12.275 06:12:42 -- bdev/bdev_raid.sh@287 -- # killprocess 121932 00:20:12.275 06:12:42 -- common/autotest_common.sh@926 -- # '[' -z 121932 ']' 00:20:12.275 06:12:42 -- common/autotest_common.sh@930 -- # kill -0 121932 00:20:12.275 06:12:42 -- common/autotest_common.sh@931 -- # uname 00:20:12.275 06:12:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:12.275 06:12:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121932 00:20:12.275 06:12:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:12.275 06:12:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:12.275 06:12:42 -- common/autotest_common.sh@944 -- # echo 'killing process 
with pid 121932' 00:20:12.275 killing process with pid 121932 00:20:12.275 06:12:42 -- common/autotest_common.sh@945 -- # kill 121932 00:20:12.275 [2024-06-11 06:12:42.768660] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:12.275 [2024-06-11 06:12:42.768838] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:12.275 06:12:42 -- common/autotest_common.sh@950 -- # wait 121932 00:20:13.654 06:12:44 -- bdev/bdev_raid.sh@289 -- # return 0 00:20:13.654 00:20:13.654 real 0m15.264s 00:20:13.654 user 0m25.723s 00:20:13.654 sys 0m2.608s 00:20:13.654 06:12:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:13.654 ************************************ 00:20:13.654 06:12:44 -- common/autotest_common.sh@10 -- # set +x 00:20:13.654 END TEST raid_state_function_test_sb 00:20:13.654 ************************************ 00:20:13.654 06:12:44 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:20:13.654 06:12:44 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:20:13.654 06:12:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:13.654 06:12:44 -- common/autotest_common.sh@10 -- # set +x 00:20:13.654 ************************************ 00:20:13.654 START TEST raid_superblock_test 00:20:13.654 ************************************ 00:20:13.654 06:12:44 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 4 00:20:13.654 06:12:44 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:20:13.654 06:12:44 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:20:13.654 06:12:44 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:20:13.654 06:12:44 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:20:13.654 06:12:44 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:20:13.654 06:12:44 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:20:13.654 06:12:44 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:20:13.654 06:12:44 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:20:13.654 06:12:44 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:20:13.654 06:12:44 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:20:13.654 06:12:44 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:20:13.654 06:12:44 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:20:13.654 06:12:44 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:20:13.654 06:12:44 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:20:13.654 06:12:44 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:20:13.654 06:12:44 -- bdev/bdev_raid.sh@357 -- # raid_pid=122386 00:20:13.654 06:12:44 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:13.654 06:12:44 -- bdev/bdev_raid.sh@358 -- # waitforlisten 122386 /var/tmp/spdk-raid.sock 00:20:13.654 06:12:44 -- common/autotest_common.sh@819 -- # '[' -z 122386 ']' 00:20:13.654 06:12:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:13.654 06:12:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:13.654 06:12:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:13.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
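The startup sequence traced above (bdev_raid.sh@356-358) can be reproduced by hand. A minimal sketch, assuming the SPDK tree at /home/vagrant/spdk_repo/spdk as in the trace; the readiness loop is only a stand-in for the suite's waitforlisten helper, and the availability of the rpc_get_methods RPC is an assumption, not something shown in this log:

    #!/usr/bin/env bash
    # Start the bdev_svc stub app on the raid test socket with bdev_raid
    # debug logging enabled, as the trace above does.
    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/spdk-raid.sock
    "$spdk/test/app/bdev_svc/bdev_svc" -r "$sock" -L bdev_raid &
    raid_pid=$!
    # Crude stand-in for the suite's waitforlisten helper: poll until the
    # JSON-RPC server answers on the socket (rpc_get_methods assumed).
    until "$spdk/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done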
00:20:13.654 06:12:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:13.654 06:12:44 -- common/autotest_common.sh@10 -- # set +x 00:20:13.914 [2024-06-11 06:12:44.326120] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:13.914 [2024-06-11 06:12:44.326993] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122386 ] 00:20:13.914 [2024-06-11 06:12:44.492086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.191 [2024-06-11 06:12:44.728526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.461 [2024-06-11 06:12:44.968880] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:14.720 06:12:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:14.720 06:12:45 -- common/autotest_common.sh@852 -- # return 0 00:20:14.720 06:12:45 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:20:14.720 06:12:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:14.720 06:12:45 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:20:14.720 06:12:45 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:20:14.720 06:12:45 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:14.720 06:12:45 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:14.720 06:12:45 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:14.720 06:12:45 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:14.720 06:12:45 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:14.980 malloc1 00:20:14.980 06:12:45 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:15.239 [2024-06-11 06:12:45.730674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:15.239 [2024-06-11 06:12:45.730803] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:15.239 [2024-06-11 06:12:45.730850] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:20:15.239 [2024-06-11 06:12:45.730901] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:15.239 [2024-06-11 06:12:45.733665] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:15.239 [2024-06-11 06:12:45.733721] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:15.239 pt1 00:20:15.239 06:12:45 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:15.239 06:12:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:15.239 06:12:45 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:20:15.239 06:12:45 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:20:15.239 06:12:45 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:15.239 06:12:45 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:15.239 06:12:45 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:15.239 06:12:45 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:15.239 06:12:45 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:15.498 malloc2 00:20:15.498 06:12:45 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:15.757 [2024-06-11 06:12:46.151612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:15.757 [2024-06-11 06:12:46.151715] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:15.757 [2024-06-11 06:12:46.151760] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:15.757 [2024-06-11 06:12:46.151821] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:15.758 [2024-06-11 06:12:46.154494] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:15.758 [2024-06-11 06:12:46.154545] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:15.758 pt2 00:20:15.758 06:12:46 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:15.758 06:12:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:15.758 06:12:46 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:20:15.758 06:12:46 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:20:15.758 06:12:46 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:15.758 06:12:46 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:15.758 06:12:46 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:15.758 06:12:46 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:15.758 06:12:46 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:15.758 malloc3 00:20:15.758 06:12:46 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:16.017 [2024-06-11 06:12:46.531039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:16.017 [2024-06-11 06:12:46.531141] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.017 [2024-06-11 06:12:46.531188] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:20:16.017 [2024-06-11 06:12:46.531232] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.017 [2024-06-11 06:12:46.533875] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.017 [2024-06-11 06:12:46.533933] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:16.017 pt3 00:20:16.017 06:12:46 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:16.017 06:12:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:16.017 06:12:46 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:20:16.017 06:12:46 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:20:16.017 06:12:46 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:20:16.017 06:12:46 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:16.017 06:12:46 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:20:16.017 06:12:46 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:16.017 06:12:46 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:20:16.276 malloc4 00:20:16.276 06:12:46 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:16.535 [2024-06-11 06:12:46.922373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:16.535 [2024-06-11 06:12:46.922466] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.535 [2024-06-11 06:12:46.922500] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:20:16.535 [2024-06-11 06:12:46.922546] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.535 [2024-06-11 06:12:46.925198] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.535 [2024-06-11 06:12:46.925253] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:16.535 pt4 00:20:16.535 06:12:46 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:20:16.535 06:12:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:20:16.535 06:12:46 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:20:16.535 [2024-06-11 06:12:47.094474] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:16.535 [2024-06-11 06:12:47.096728] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:16.535 [2024-06-11 06:12:47.096814] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:16.535 [2024-06-11 06:12:47.096861] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:16.535 [2024-06-11 06:12:47.097102] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:20:16.535 [2024-06-11 06:12:47.097113] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:16.535 [2024-06-11 06:12:47.097263] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:20:16.536 [2024-06-11 06:12:47.097642] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:20:16.536 [2024-06-11 06:12:47.097661] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:20:16.536 [2024-06-11 06:12:47.097819] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:16.536 06:12:47 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:16.536 06:12:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:16.536 06:12:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:16.536 06:12:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:16.536 06:12:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:16.536 06:12:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:16.536 06:12:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:16.536 06:12:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:16.536 06:12:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:16.536 06:12:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:16.536 06:12:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:20:16.536 06:12:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.795 06:12:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:16.795 "name": "raid_bdev1", 00:20:16.795 "uuid": "faf43d97-d2c9-4933-a452-41c7f262d526", 00:20:16.795 "strip_size_kb": 0, 00:20:16.795 "state": "online", 00:20:16.795 "raid_level": "raid1", 00:20:16.795 "superblock": true, 00:20:16.795 "num_base_bdevs": 4, 00:20:16.795 "num_base_bdevs_discovered": 4, 00:20:16.795 "num_base_bdevs_operational": 4, 00:20:16.795 "base_bdevs_list": [ 00:20:16.795 { 00:20:16.795 "name": "pt1", 00:20:16.795 "uuid": "9653f530-9d97-5c44-a081-0fdb175cb0c1", 00:20:16.795 "is_configured": true, 00:20:16.795 "data_offset": 2048, 00:20:16.795 "data_size": 63488 00:20:16.795 }, 00:20:16.795 { 00:20:16.795 "name": "pt2", 00:20:16.795 "uuid": "6c68a516-99e1-5017-b9fc-de86dbc3d5e0", 00:20:16.795 "is_configured": true, 00:20:16.795 "data_offset": 2048, 00:20:16.795 "data_size": 63488 00:20:16.795 }, 00:20:16.795 { 00:20:16.795 "name": "pt3", 00:20:16.795 "uuid": "c8faf11e-bfb8-5e59-bf70-550950669a9f", 00:20:16.795 "is_configured": true, 00:20:16.795 "data_offset": 2048, 00:20:16.795 "data_size": 63488 00:20:16.795 }, 00:20:16.795 { 00:20:16.795 "name": "pt4", 00:20:16.795 "uuid": "7edde881-e9fa-5c4c-bfb1-93cf90aca939", 00:20:16.795 "is_configured": true, 00:20:16.795 "data_offset": 2048, 00:20:16.795 "data_size": 63488 00:20:16.795 } 00:20:16.795 ] 00:20:16.795 }' 00:20:16.795 06:12:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:16.795 06:12:47 -- common/autotest_common.sh@10 -- # set +x 00:20:17.362 06:12:47 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:17.362 06:12:47 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:20:17.621 [2024-06-11 06:12:48.098815] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:17.621 06:12:48 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=faf43d97-d2c9-4933-a452-41c7f262d526 00:20:17.621 06:12:48 -- bdev/bdev_raid.sh@380 -- # '[' -z faf43d97-d2c9-4933-a452-41c7f262d526 ']' 00:20:17.621 06:12:48 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:17.880 [2024-06-11 06:12:48.274604] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:17.880 [2024-06-11 06:12:48.274636] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:17.880 [2024-06-11 06:12:48.274738] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:17.880 [2024-06-11 06:12:48.274831] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:17.880 [2024-06-11 06:12:48.274840] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:20:17.880 06:12:48 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.880 06:12:48 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:20:18.139 06:12:48 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:20:18.139 06:12:48 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:20:18.139 06:12:48 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:18.139 06:12:48 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
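The assembly traced above (06:12:45 through 06:12:47) condenses to a handful of RPCs, all of which appear verbatim in the trace. A sketch, assuming the target from the previous sketch is running; only the final `.state` projection is an illustrative addition to the jq filter the suite uses:

    # Four malloc bdevs, each wrapped in a passthru bdev, assembled into a
    # raid1 with an on-disk superblock (-s), mirroring 06:12:45-06:12:47.
    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/spdk-raid.sock
    rpc() { "$spdk/scripts/rpc.py" -s "$sock" "$@"; }
    for i in 1 2 3 4; do
        rpc bdev_malloc_create 32 512 -b "malloc$i"
        rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    rpc bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
    # Same query verify_raid_bdev_state runs, narrowed to the state field.
    rpc bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .state'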
00:20:18.139 06:12:48 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:18.139 06:12:48 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:18.398 06:12:48 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:18.398 06:12:48 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:18.658 06:12:49 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:18.658 06:12:49 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:18.658 06:12:49 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:18.658 06:12:49 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:18.917 06:12:49 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:20:18.917 06:12:49 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:18.917 06:12:49 -- common/autotest_common.sh@640 -- # local es=0 00:20:18.917 06:12:49 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:18.917 06:12:49 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.917 06:12:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:18.917 06:12:49 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.917 06:12:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:18.917 06:12:49 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.917 06:12:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:18.917 06:12:49 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:18.917 06:12:49 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:18.917 06:12:49 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:19.176 [2024-06-11 06:12:49.686817] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:19.176 [2024-06-11 06:12:49.689105] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:19.176 [2024-06-11 06:12:49.689176] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:19.176 [2024-06-11 06:12:49.689207] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:20:19.176 [2024-06-11 06:12:49.689258] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:20:19.176 [2024-06-11 06:12:49.689350] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:20:19.176 [2024-06-11 06:12:49.689378] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:20:19.176 [2024-06-11 06:12:49.689450] 
bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:20:19.176 [2024-06-11 06:12:49.689474] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:19.176 [2024-06-11 06:12:49.689484] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring 00:20:19.176 request: 00:20:19.176 { 00:20:19.176 "name": "raid_bdev1", 00:20:19.176 "raid_level": "raid1", 00:20:19.176 "base_bdevs": [ 00:20:19.176 "malloc1", 00:20:19.176 "malloc2", 00:20:19.176 "malloc3", 00:20:19.176 "malloc4" 00:20:19.176 ], 00:20:19.176 "superblock": false, 00:20:19.176 "method": "bdev_raid_create", 00:20:19.176 "req_id": 1 00:20:19.176 } 00:20:19.176 Got JSON-RPC error response 00:20:19.176 response: 00:20:19.176 { 00:20:19.176 "code": -17, 00:20:19.176 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:19.176 } 00:20:19.176 06:12:49 -- common/autotest_common.sh@643 -- # es=1 00:20:19.176 06:12:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:19.176 06:12:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:19.176 06:12:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:19.176 06:12:49 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:20:19.176 06:12:49 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.435 06:12:49 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:20:19.435 06:12:49 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:20:19.435 06:12:49 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:19.693 [2024-06-11 06:12:50.106899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:19.693 [2024-06-11 06:12:50.107012] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:19.693 [2024-06-11 06:12:50.107049] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:19.693 [2024-06-11 06:12:50.107077] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:19.693 [2024-06-11 06:12:50.109758] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:19.693 [2024-06-11 06:12:50.109833] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:19.693 [2024-06-11 06:12:50.109968] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:20:19.693 [2024-06-11 06:12:50.110022] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:19.693 pt1 00:20:19.693 06:12:50 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:20:19.693 06:12:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:19.693 06:12:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:19.693 06:12:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:19.693 06:12:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:19.693 06:12:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:19.693 06:12:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:19.693 06:12:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:19.693 06:12:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:19.693 06:12:50 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:20:19.693 06:12:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.693 06:12:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.693 06:12:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:19.693 "name": "raid_bdev1", 00:20:19.693 "uuid": "faf43d97-d2c9-4933-a452-41c7f262d526", 00:20:19.693 "strip_size_kb": 0, 00:20:19.693 "state": "configuring", 00:20:19.693 "raid_level": "raid1", 00:20:19.693 "superblock": true, 00:20:19.693 "num_base_bdevs": 4, 00:20:19.693 "num_base_bdevs_discovered": 1, 00:20:19.693 "num_base_bdevs_operational": 4, 00:20:19.693 "base_bdevs_list": [ 00:20:19.693 { 00:20:19.693 "name": "pt1", 00:20:19.693 "uuid": "9653f530-9d97-5c44-a081-0fdb175cb0c1", 00:20:19.693 "is_configured": true, 00:20:19.693 "data_offset": 2048, 00:20:19.693 "data_size": 63488 00:20:19.693 }, 00:20:19.693 { 00:20:19.693 "name": null, 00:20:19.693 "uuid": "6c68a516-99e1-5017-b9fc-de86dbc3d5e0", 00:20:19.693 "is_configured": false, 00:20:19.693 "data_offset": 2048, 00:20:19.693 "data_size": 63488 00:20:19.693 }, 00:20:19.693 { 00:20:19.693 "name": null, 00:20:19.693 "uuid": "c8faf11e-bfb8-5e59-bf70-550950669a9f", 00:20:19.693 "is_configured": false, 00:20:19.693 "data_offset": 2048, 00:20:19.693 "data_size": 63488 00:20:19.693 }, 00:20:19.693 { 00:20:19.693 "name": null, 00:20:19.693 "uuid": "7edde881-e9fa-5c4c-bfb1-93cf90aca939", 00:20:19.693 "is_configured": false, 00:20:19.693 "data_offset": 2048, 00:20:19.693 "data_size": 63488 00:20:19.693 } 00:20:19.693 ] 00:20:19.693 }' 00:20:19.693 06:12:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:19.693 06:12:50 -- common/autotest_common.sh@10 -- # set +x 00:20:20.266 06:12:50 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:20:20.266 06:12:50 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:20.525 [2024-06-11 06:12:51.023046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:20.525 [2024-06-11 06:12:51.023153] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:20.525 [2024-06-11 06:12:51.023196] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:20.526 [2024-06-11 06:12:51.023220] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:20.526 [2024-06-11 06:12:51.023775] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:20.526 [2024-06-11 06:12:51.023827] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:20.526 [2024-06-11 06:12:51.023955] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:20.526 [2024-06-11 06:12:51.023983] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:20.526 pt2 00:20:20.526 06:12:51 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:20.785 [2024-06-11 06:12:51.199104] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:20.785 06:12:51 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:20:20.785 06:12:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:20.785 06:12:51 -- bdev/bdev_raid.sh@118 -- # 
local expected_state=configuring 00:20:20.785 06:12:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:20.785 06:12:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:20.785 06:12:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:20.785 06:12:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:20.785 06:12:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:20.785 06:12:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:20.785 06:12:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:20.785 06:12:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.785 06:12:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.785 06:12:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:20.785 "name": "raid_bdev1", 00:20:20.785 "uuid": "faf43d97-d2c9-4933-a452-41c7f262d526", 00:20:20.785 "strip_size_kb": 0, 00:20:20.785 "state": "configuring", 00:20:20.785 "raid_level": "raid1", 00:20:20.785 "superblock": true, 00:20:20.785 "num_base_bdevs": 4, 00:20:20.785 "num_base_bdevs_discovered": 1, 00:20:20.785 "num_base_bdevs_operational": 4, 00:20:20.785 "base_bdevs_list": [ 00:20:20.785 { 00:20:20.785 "name": "pt1", 00:20:20.785 "uuid": "9653f530-9d97-5c44-a081-0fdb175cb0c1", 00:20:20.785 "is_configured": true, 00:20:20.785 "data_offset": 2048, 00:20:20.785 "data_size": 63488 00:20:20.785 }, 00:20:20.785 { 00:20:20.785 "name": null, 00:20:20.785 "uuid": "6c68a516-99e1-5017-b9fc-de86dbc3d5e0", 00:20:20.785 "is_configured": false, 00:20:20.785 "data_offset": 2048, 00:20:20.785 "data_size": 63488 00:20:20.785 }, 00:20:20.785 { 00:20:20.785 "name": null, 00:20:20.785 "uuid": "c8faf11e-bfb8-5e59-bf70-550950669a9f", 00:20:20.785 "is_configured": false, 00:20:20.785 "data_offset": 2048, 00:20:20.785 "data_size": 63488 00:20:20.785 }, 00:20:20.785 { 00:20:20.785 "name": null, 00:20:20.785 "uuid": "7edde881-e9fa-5c4c-bfb1-93cf90aca939", 00:20:20.785 "is_configured": false, 00:20:20.785 "data_offset": 2048, 00:20:20.785 "data_size": 63488 00:20:20.785 } 00:20:20.785 ] 00:20:20.785 }' 00:20:20.785 06:12:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:20.785 06:12:51 -- common/autotest_common.sh@10 -- # set +x 00:20:21.353 06:12:51 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:20:21.353 06:12:51 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:21.353 06:12:51 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:21.611 [2024-06-11 06:12:52.135260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:21.611 [2024-06-11 06:12:52.135354] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.611 [2024-06-11 06:12:52.135396] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:21.611 [2024-06-11 06:12:52.135419] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.611 [2024-06-11 06:12:52.135949] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.611 [2024-06-11 06:12:52.136020] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:21.611 [2024-06-11 06:12:52.136128] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:21.611 [2024-06-11 
06:12:52.136149] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:21.611 pt2 00:20:21.611 06:12:52 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:21.611 06:12:52 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:21.611 06:12:52 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:21.869 [2024-06-11 06:12:52.379307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:21.869 [2024-06-11 06:12:52.379392] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.869 [2024-06-11 06:12:52.379431] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:21.869 [2024-06-11 06:12:52.379461] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.869 [2024-06-11 06:12:52.379958] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.869 [2024-06-11 06:12:52.380018] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:21.870 [2024-06-11 06:12:52.380121] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:21.870 [2024-06-11 06:12:52.380141] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:21.870 pt3 00:20:21.870 06:12:52 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:21.870 06:12:52 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:21.870 06:12:52 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:22.129 [2024-06-11 06:12:52.561087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:22.129 [2024-06-11 06:12:52.561180] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.129 [2024-06-11 06:12:52.561218] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:22.129 [2024-06-11 06:12:52.561246] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.129 [2024-06-11 06:12:52.561748] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.129 [2024-06-11 06:12:52.561809] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:22.129 [2024-06-11 06:12:52.561925] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:20:22.129 [2024-06-11 06:12:52.561949] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:22.129 [2024-06-11 06:12:52.562094] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:20:22.129 [2024-06-11 06:12:52.562102] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:22.129 [2024-06-11 06:12:52.562204] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:22.129 [2024-06-11 06:12:52.562541] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:20:22.129 [2024-06-11 06:12:52.562559] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:20:22.129 [2024-06-11 06:12:52.562702] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:22.129 pt4 
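The JSON-RPC error captured at 06:12:49 above (-17, "File exists") is the expected outcome when the array is recreated directly on the malloc bdevs: each one still carries the raid superblock that was written through its passthru layer, so the examine path claims them for the existing array instead. A sketch of that negative check, assuming the bdevs from the earlier steps are still present:

    # Creating raid_bdev1 straight on the malloc bdevs must fail with
    # -17 (File exists): their superblocks already name an array.
    if "$spdk/scripts/rpc.py" -s "$sock" bdev_raid_create -r raid1 \
            -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
        echo "unexpected success: superblock should already be present" >&2
        exit 1
    fi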
00:20:22.129 06:12:52 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:22.129 06:12:52 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:22.129 06:12:52 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:22.129 06:12:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:22.129 06:12:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:22.129 06:12:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:22.129 06:12:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:22.129 06:12:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:22.129 06:12:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:22.129 06:12:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:22.129 06:12:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:22.129 06:12:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:22.129 06:12:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:22.129 06:12:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.387 06:12:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:22.387 "name": "raid_bdev1", 00:20:22.387 "uuid": "faf43d97-d2c9-4933-a452-41c7f262d526", 00:20:22.387 "strip_size_kb": 0, 00:20:22.387 "state": "online", 00:20:22.387 "raid_level": "raid1", 00:20:22.387 "superblock": true, 00:20:22.387 "num_base_bdevs": 4, 00:20:22.387 "num_base_bdevs_discovered": 4, 00:20:22.387 "num_base_bdevs_operational": 4, 00:20:22.387 "base_bdevs_list": [ 00:20:22.387 { 00:20:22.387 "name": "pt1", 00:20:22.387 "uuid": "9653f530-9d97-5c44-a081-0fdb175cb0c1", 00:20:22.387 "is_configured": true, 00:20:22.387 "data_offset": 2048, 00:20:22.387 "data_size": 63488 00:20:22.387 }, 00:20:22.387 { 00:20:22.387 "name": "pt2", 00:20:22.387 "uuid": "6c68a516-99e1-5017-b9fc-de86dbc3d5e0", 00:20:22.387 "is_configured": true, 00:20:22.387 "data_offset": 2048, 00:20:22.387 "data_size": 63488 00:20:22.387 }, 00:20:22.387 { 00:20:22.387 "name": "pt3", 00:20:22.388 "uuid": "c8faf11e-bfb8-5e59-bf70-550950669a9f", 00:20:22.388 "is_configured": true, 00:20:22.388 "data_offset": 2048, 00:20:22.388 "data_size": 63488 00:20:22.388 }, 00:20:22.388 { 00:20:22.388 "name": "pt4", 00:20:22.388 "uuid": "7edde881-e9fa-5c4c-bfb1-93cf90aca939", 00:20:22.388 "is_configured": true, 00:20:22.388 "data_offset": 2048, 00:20:22.388 "data_size": 63488 00:20:22.388 } 00:20:22.388 ] 00:20:22.388 }' 00:20:22.388 06:12:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:22.388 06:12:52 -- common/autotest_common.sh@10 -- # set +x 00:20:22.956 06:12:53 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:22.956 06:12:53 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:20:23.215 [2024-06-11 06:12:53.605452] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:23.215 06:12:53 -- bdev/bdev_raid.sh@430 -- # '[' faf43d97-d2c9-4933-a452-41c7f262d526 '!=' faf43d97-d2c9-4933-a452-41c7f262d526 ']' 00:20:23.215 06:12:53 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:20:23.215 06:12:53 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:23.215 06:12:53 -- bdev/bdev_raid.sh@196 -- # return 0 00:20:23.215 06:12:53 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:23.475 [2024-06-11 06:12:53.861328] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:23.475 06:12:53 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:23.475 06:12:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:23.475 06:12:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:23.475 06:12:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:23.475 06:12:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:23.475 06:12:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:23.475 06:12:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:23.475 06:12:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:23.475 06:12:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:23.475 06:12:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:23.475 06:12:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.475 06:12:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.734 06:12:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:23.734 "name": "raid_bdev1", 00:20:23.734 "uuid": "faf43d97-d2c9-4933-a452-41c7f262d526", 00:20:23.734 "strip_size_kb": 0, 00:20:23.734 "state": "online", 00:20:23.734 "raid_level": "raid1", 00:20:23.734 "superblock": true, 00:20:23.734 "num_base_bdevs": 4, 00:20:23.734 "num_base_bdevs_discovered": 3, 00:20:23.734 "num_base_bdevs_operational": 3, 00:20:23.734 "base_bdevs_list": [ 00:20:23.734 { 00:20:23.734 "name": null, 00:20:23.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.734 "is_configured": false, 00:20:23.734 "data_offset": 2048, 00:20:23.734 "data_size": 63488 00:20:23.734 }, 00:20:23.734 { 00:20:23.734 "name": "pt2", 00:20:23.734 "uuid": "6c68a516-99e1-5017-b9fc-de86dbc3d5e0", 00:20:23.734 "is_configured": true, 00:20:23.734 "data_offset": 2048, 00:20:23.734 "data_size": 63488 00:20:23.734 }, 00:20:23.734 { 00:20:23.734 "name": "pt3", 00:20:23.734 "uuid": "c8faf11e-bfb8-5e59-bf70-550950669a9f", 00:20:23.734 "is_configured": true, 00:20:23.734 "data_offset": 2048, 00:20:23.734 "data_size": 63488 00:20:23.734 }, 00:20:23.734 { 00:20:23.734 "name": "pt4", 00:20:23.734 "uuid": "7edde881-e9fa-5c4c-bfb1-93cf90aca939", 00:20:23.734 "is_configured": true, 00:20:23.734 "data_offset": 2048, 00:20:23.734 "data_size": 63488 00:20:23.734 } 00:20:23.734 ] 00:20:23.734 }' 00:20:23.734 06:12:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:23.734 06:12:54 -- common/autotest_common.sh@10 -- # set +x 00:20:24.302 06:12:54 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:24.302 [2024-06-11 06:12:54.801444] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:24.302 [2024-06-11 06:12:54.801481] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:24.302 [2024-06-11 06:12:54.801572] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:24.302 [2024-06-11 06:12:54.801657] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:24.302 [2024-06-11 06:12:54.801666] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:20:24.302 06:12:54 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:20:24.302 06:12:54 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:20:24.561 06:12:55 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:20:24.561 06:12:55 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:20:24.561 06:12:55 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:20:24.561 06:12:55 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:24.561 06:12:55 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:24.820 06:12:55 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:20:24.820 06:12:55 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:24.820 06:12:55 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:24.820 06:12:55 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:20:24.820 06:12:55 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:24.820 06:12:55 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:25.080 06:12:55 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:20:25.080 06:12:55 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:25.080 06:12:55 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:20:25.080 06:12:55 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:20:25.080 06:12:55 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:25.340 [2024-06-11 06:12:55.790903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:25.340 [2024-06-11 06:12:55.791002] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.340 [2024-06-11 06:12:55.791037] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:20:25.340 [2024-06-11 06:12:55.791074] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:25.340 [2024-06-11 06:12:55.793592] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.340 [2024-06-11 06:12:55.793663] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:25.340 [2024-06-11 06:12:55.793806] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:25.340 [2024-06-11 06:12:55.793862] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:25.340 pt2 00:20:25.340 06:12:55 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:25.340 06:12:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:25.340 06:12:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:25.340 06:12:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:25.340 06:12:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:25.340 06:12:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:25.340 06:12:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:25.340 06:12:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:25.340 06:12:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:25.340 06:12:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:25.340 06:12:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.340 06:12:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.340 06:12:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:25.340 "name": "raid_bdev1", 00:20:25.340 "uuid": "faf43d97-d2c9-4933-a452-41c7f262d526", 00:20:25.340 "strip_size_kb": 0, 00:20:25.340 "state": "configuring", 00:20:25.340 "raid_level": "raid1", 00:20:25.340 "superblock": true, 00:20:25.340 "num_base_bdevs": 4, 00:20:25.340 "num_base_bdevs_discovered": 1, 00:20:25.340 "num_base_bdevs_operational": 3, 00:20:25.340 "base_bdevs_list": [ 00:20:25.340 { 00:20:25.340 "name": null, 00:20:25.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.340 "is_configured": false, 00:20:25.340 "data_offset": 2048, 00:20:25.340 "data_size": 63488 00:20:25.340 }, 00:20:25.340 { 00:20:25.340 "name": "pt2", 00:20:25.340 "uuid": "6c68a516-99e1-5017-b9fc-de86dbc3d5e0", 00:20:25.340 "is_configured": true, 00:20:25.340 "data_offset": 2048, 00:20:25.340 "data_size": 63488 00:20:25.340 }, 00:20:25.340 { 00:20:25.340 "name": null, 00:20:25.340 "uuid": "c8faf11e-bfb8-5e59-bf70-550950669a9f", 00:20:25.340 "is_configured": false, 00:20:25.340 "data_offset": 2048, 00:20:25.340 "data_size": 63488 00:20:25.340 }, 00:20:25.340 { 00:20:25.340 "name": null, 00:20:25.340 "uuid": "7edde881-e9fa-5c4c-bfb1-93cf90aca939", 00:20:25.340 "is_configured": false, 00:20:25.340 "data_offset": 2048, 00:20:25.340 "data_size": 63488 00:20:25.340 } 00:20:25.340 ] 00:20:25.340 }' 00:20:25.340 06:12:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:25.340 06:12:55 -- common/autotest_common.sh@10 -- # set +x 00:20:25.953 06:12:56 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:20:25.953 06:12:56 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:20:25.953 06:12:56 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:26.215 [2024-06-11 06:12:56.807101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:26.215 [2024-06-11 06:12:56.807212] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:26.215 [2024-06-11 06:12:56.807257] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:20:26.215 [2024-06-11 06:12:56.807282] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:26.215 [2024-06-11 06:12:56.807842] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:26.215 [2024-06-11 06:12:56.807899] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:26.215 [2024-06-11 06:12:56.808026] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:26.215 [2024-06-11 06:12:56.808049] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:26.215 pt3 00:20:26.215 06:12:56 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:26.215 06:12:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:26.215 06:12:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:26.215 06:12:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:26.215 06:12:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:26.215 06:12:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:26.215 06:12:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:26.215 06:12:56 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:20:26.215 06:12:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:26.215 06:12:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:26.215 06:12:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.215 06:12:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.475 06:12:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:26.475 "name": "raid_bdev1", 00:20:26.475 "uuid": "faf43d97-d2c9-4933-a452-41c7f262d526", 00:20:26.475 "strip_size_kb": 0, 00:20:26.475 "state": "configuring", 00:20:26.475 "raid_level": "raid1", 00:20:26.475 "superblock": true, 00:20:26.475 "num_base_bdevs": 4, 00:20:26.475 "num_base_bdevs_discovered": 2, 00:20:26.475 "num_base_bdevs_operational": 3, 00:20:26.475 "base_bdevs_list": [ 00:20:26.475 { 00:20:26.475 "name": null, 00:20:26.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.475 "is_configured": false, 00:20:26.475 "data_offset": 2048, 00:20:26.475 "data_size": 63488 00:20:26.475 }, 00:20:26.475 { 00:20:26.475 "name": "pt2", 00:20:26.475 "uuid": "6c68a516-99e1-5017-b9fc-de86dbc3d5e0", 00:20:26.475 "is_configured": true, 00:20:26.475 "data_offset": 2048, 00:20:26.475 "data_size": 63488 00:20:26.475 }, 00:20:26.475 { 00:20:26.475 "name": "pt3", 00:20:26.475 "uuid": "c8faf11e-bfb8-5e59-bf70-550950669a9f", 00:20:26.475 "is_configured": true, 00:20:26.475 "data_offset": 2048, 00:20:26.475 "data_size": 63488 00:20:26.475 }, 00:20:26.475 { 00:20:26.475 "name": null, 00:20:26.475 "uuid": "7edde881-e9fa-5c4c-bfb1-93cf90aca939", 00:20:26.475 "is_configured": false, 00:20:26.475 "data_offset": 2048, 00:20:26.475 "data_size": 63488 00:20:26.475 } 00:20:26.475 ] 00:20:26.475 }' 00:20:26.475 06:12:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:26.475 06:12:57 -- common/autotest_common.sh@10 -- # set +x 00:20:27.043 06:12:57 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:20:27.043 06:12:57 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:20:27.043 06:12:57 -- bdev/bdev_raid.sh@462 -- # i=3 00:20:27.043 06:12:57 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:27.303 [2024-06-11 06:12:57.897266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:27.303 [2024-06-11 06:12:57.897376] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:27.303 [2024-06-11 06:12:57.897422] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:20:27.303 [2024-06-11 06:12:57.897446] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:27.303 [2024-06-11 06:12:57.898020] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:27.303 [2024-06-11 06:12:57.898063] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:27.303 [2024-06-11 06:12:57.898204] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:20:27.303 [2024-06-11 06:12:57.898230] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:27.303 [2024-06-11 06:12:57.898379] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ba80 00:20:27.303 [2024-06-11 06:12:57.898395] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
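The trace above re-creates pt3 and pt4 with bdev_passthru_create and, after each step, re-runs the verify_raid_bdev_state helper (bdev_raid.sh@117-129): it dumps every RAID bdev over RPC, jq-selects the one under test, and checks fields such as state and num_base_bdevs_discovered. A minimal bash sketch of that check, using only the rpc.py path and RPC names visible in the trace (the function name and locals below are illustrative, not the script's own):

    # Sketch of the verify_raid_bdev_state pattern from the trace.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    verify_raid_state() {
        local name=$1 expected_state=$2 expected_discovered=$3
        local info
        # Dump all RAID bdevs and keep only the one under test.
        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
            jq -r ".[] | select(.name == \"$name\")")
        [[ $(jq -r '.state' <<<"$info") == "$expected_state" ]] || return 1
        [[ $(jq -r '.num_base_bdevs_discovered' <<<"$info") -eq $expected_discovered ]] || return 1
    }

    # E.g. after pt3 is claimed: configuring, 2 of 4 base bdevs found.
    verify_raid_state raid_bdev1 configuring 2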
00:20:27.303 [2024-06-11 06:12:57.898543] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:20:27.303 [2024-06-11 06:12:57.898908] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ba80 00:20:27.303 [2024-06-11 06:12:57.898927] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ba80 00:20:27.303 [2024-06-11 06:12:57.899079] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:27.303 pt4 00:20:27.303 06:12:57 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:27.303 06:12:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:27.303 06:12:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:27.303 06:12:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:27.303 06:12:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:27.303 06:12:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:27.303 06:12:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:27.303 06:12:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:27.303 06:12:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:27.303 06:12:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:27.303 06:12:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.303 06:12:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.562 06:12:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:27.562 "name": "raid_bdev1", 00:20:27.562 "uuid": "faf43d97-d2c9-4933-a452-41c7f262d526", 00:20:27.562 "strip_size_kb": 0, 00:20:27.562 "state": "online", 00:20:27.562 "raid_level": "raid1", 00:20:27.562 "superblock": true, 00:20:27.562 "num_base_bdevs": 4, 00:20:27.562 "num_base_bdevs_discovered": 3, 00:20:27.562 "num_base_bdevs_operational": 3, 00:20:27.562 "base_bdevs_list": [ 00:20:27.562 { 00:20:27.562 "name": null, 00:20:27.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.562 "is_configured": false, 00:20:27.562 "data_offset": 2048, 00:20:27.562 "data_size": 63488 00:20:27.562 }, 00:20:27.562 { 00:20:27.562 "name": "pt2", 00:20:27.562 "uuid": "6c68a516-99e1-5017-b9fc-de86dbc3d5e0", 00:20:27.562 "is_configured": true, 00:20:27.562 "data_offset": 2048, 00:20:27.562 "data_size": 63488 00:20:27.562 }, 00:20:27.562 { 00:20:27.562 "name": "pt3", 00:20:27.562 "uuid": "c8faf11e-bfb8-5e59-bf70-550950669a9f", 00:20:27.562 "is_configured": true, 00:20:27.562 "data_offset": 2048, 00:20:27.562 "data_size": 63488 00:20:27.562 }, 00:20:27.562 { 00:20:27.562 "name": "pt4", 00:20:27.562 "uuid": "7edde881-e9fa-5c4c-bfb1-93cf90aca939", 00:20:27.562 "is_configured": true, 00:20:27.562 "data_offset": 2048, 00:20:27.562 "data_size": 63488 00:20:27.562 } 00:20:27.562 ] 00:20:27.562 }' 00:20:27.562 06:12:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:27.562 06:12:58 -- common/autotest_common.sh@10 -- # set +x 00:20:28.131 06:12:58 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:20:28.131 06:12:58 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:28.390 [2024-06-11 06:12:58.967892] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:28.390 [2024-06-11 06:12:58.967933] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:20:28.390 [2024-06-11 06:12:58.968023] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:28.390 [2024-06-11 06:12:58.968103] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:28.390 [2024-06-11 06:12:58.968112] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state offline 00:20:28.390 06:12:58 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.390 06:12:58 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:20:28.649 06:12:59 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:20:28.649 06:12:59 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:20:28.649 06:12:59 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:28.909 [2024-06-11 06:12:59.379991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:28.909 [2024-06-11 06:12:59.380107] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:28.909 [2024-06-11 06:12:59.380149] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:20:28.909 [2024-06-11 06:12:59.380172] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:28.909 [2024-06-11 06:12:59.382938] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:28.909 [2024-06-11 06:12:59.383026] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:28.909 [2024-06-11 06:12:59.383148] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:20:28.909 [2024-06-11 06:12:59.383195] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:28.909 pt1 00:20:28.909 06:12:59 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:20:28.909 06:12:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:28.909 06:12:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:28.909 06:12:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:28.909 06:12:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:28.909 06:12:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:28.909 06:12:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:28.909 06:12:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:28.909 06:12:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:28.909 06:12:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:28.909 06:12:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.909 06:12:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.169 06:12:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:29.169 "name": "raid_bdev1", 00:20:29.169 "uuid": "faf43d97-d2c9-4933-a452-41c7f262d526", 00:20:29.169 "strip_size_kb": 0, 00:20:29.169 "state": "configuring", 00:20:29.169 "raid_level": "raid1", 00:20:29.169 "superblock": true, 00:20:29.169 "num_base_bdevs": 4, 00:20:29.169 "num_base_bdevs_discovered": 1, 00:20:29.169 "num_base_bdevs_operational": 4, 00:20:29.169 "base_bdevs_list": [ 00:20:29.169 { 00:20:29.169 "name": "pt1", 00:20:29.169 "uuid": 
"9653f530-9d97-5c44-a081-0fdb175cb0c1", 00:20:29.169 "is_configured": true, 00:20:29.169 "data_offset": 2048, 00:20:29.169 "data_size": 63488 00:20:29.169 }, 00:20:29.169 { 00:20:29.169 "name": null, 00:20:29.169 "uuid": "6c68a516-99e1-5017-b9fc-de86dbc3d5e0", 00:20:29.169 "is_configured": false, 00:20:29.169 "data_offset": 2048, 00:20:29.169 "data_size": 63488 00:20:29.169 }, 00:20:29.169 { 00:20:29.169 "name": null, 00:20:29.169 "uuid": "c8faf11e-bfb8-5e59-bf70-550950669a9f", 00:20:29.169 "is_configured": false, 00:20:29.169 "data_offset": 2048, 00:20:29.169 "data_size": 63488 00:20:29.169 }, 00:20:29.169 { 00:20:29.169 "name": null, 00:20:29.169 "uuid": "7edde881-e9fa-5c4c-bfb1-93cf90aca939", 00:20:29.169 "is_configured": false, 00:20:29.169 "data_offset": 2048, 00:20:29.169 "data_size": 63488 00:20:29.169 } 00:20:29.169 ] 00:20:29.169 }' 00:20:29.169 06:12:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:29.169 06:12:59 -- common/autotest_common.sh@10 -- # set +x 00:20:29.737 06:13:00 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:20:29.737 06:13:00 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:29.737 06:13:00 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:29.997 06:13:00 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:29.997 06:13:00 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:29.997 06:13:00 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:29.997 06:13:00 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:29.997 06:13:00 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:29.997 06:13:00 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:30.256 06:13:00 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:30.256 06:13:00 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:30.256 06:13:00 -- bdev/bdev_raid.sh@489 -- # i=3 00:20:30.256 06:13:00 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:30.515 [2024-06-11 06:13:01.032323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:30.515 [2024-06-11 06:13:01.032435] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.515 [2024-06-11 06:13:01.032495] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:20:30.515 [2024-06-11 06:13:01.032524] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.515 [2024-06-11 06:13:01.033073] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.515 [2024-06-11 06:13:01.033133] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:30.515 [2024-06-11 06:13:01.033265] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:20:30.515 [2024-06-11 06:13:01.033277] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:30.515 [2024-06-11 06:13:01.033284] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:30.515 [2024-06-11 06:13:01.033314] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c980 name raid_bdev1, state configuring 
00:20:30.515 [2024-06-11 06:13:01.033398] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:30.515 pt4 00:20:30.515 06:13:01 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:30.515 06:13:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:30.515 06:13:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:30.515 06:13:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:30.515 06:13:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:30.515 06:13:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:30.515 06:13:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:30.515 06:13:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:30.515 06:13:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:30.515 06:13:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:30.515 06:13:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.515 06:13:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.775 06:13:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:30.775 "name": "raid_bdev1", 00:20:30.775 "uuid": "faf43d97-d2c9-4933-a452-41c7f262d526", 00:20:30.775 "strip_size_kb": 0, 00:20:30.775 "state": "configuring", 00:20:30.775 "raid_level": "raid1", 00:20:30.775 "superblock": true, 00:20:30.775 "num_base_bdevs": 4, 00:20:30.775 "num_base_bdevs_discovered": 1, 00:20:30.775 "num_base_bdevs_operational": 3, 00:20:30.775 "base_bdevs_list": [ 00:20:30.775 { 00:20:30.775 "name": null, 00:20:30.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.775 "is_configured": false, 00:20:30.775 "data_offset": 2048, 00:20:30.775 "data_size": 63488 00:20:30.775 }, 00:20:30.775 { 00:20:30.775 "name": null, 00:20:30.775 "uuid": "6c68a516-99e1-5017-b9fc-de86dbc3d5e0", 00:20:30.775 "is_configured": false, 00:20:30.775 "data_offset": 2048, 00:20:30.775 "data_size": 63488 00:20:30.775 }, 00:20:30.775 { 00:20:30.775 "name": null, 00:20:30.775 "uuid": "c8faf11e-bfb8-5e59-bf70-550950669a9f", 00:20:30.775 "is_configured": false, 00:20:30.775 "data_offset": 2048, 00:20:30.775 "data_size": 63488 00:20:30.775 }, 00:20:30.775 { 00:20:30.775 "name": "pt4", 00:20:30.775 "uuid": "7edde881-e9fa-5c4c-bfb1-93cf90aca939", 00:20:30.775 "is_configured": true, 00:20:30.775 "data_offset": 2048, 00:20:30.775 "data_size": 63488 00:20:30.775 } 00:20:30.775 ] 00:20:30.775 }' 00:20:30.775 06:13:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:30.775 06:13:01 -- common/autotest_common.sh@10 -- # set +x 00:20:31.343 06:13:01 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:20:31.343 06:13:01 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:31.343 06:13:01 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:31.625 [2024-06-11 06:13:02.020491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:31.625 [2024-06-11 06:13:02.020611] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:31.625 [2024-06-11 06:13:02.020649] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d280 00:20:31.626 [2024-06-11 06:13:02.020677] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:31.626 [2024-06-11 
06:13:02.021248] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:31.626 [2024-06-11 06:13:02.021302] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:31.626 [2024-06-11 06:13:02.021414] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:31.626 [2024-06-11 06:13:02.021436] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:31.626 pt2 00:20:31.626 06:13:02 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:20:31.626 06:13:02 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:31.626 06:13:02 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:31.897 [2024-06-11 06:13:02.284572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:31.897 [2024-06-11 06:13:02.284655] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:31.897 [2024-06-11 06:13:02.284689] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 00:20:31.897 [2024-06-11 06:13:02.284722] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:31.897 [2024-06-11 06:13:02.285261] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:31.897 [2024-06-11 06:13:02.285322] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:31.897 [2024-06-11 06:13:02.285451] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:31.897 [2024-06-11 06:13:02.285474] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:31.897 [2024-06-11 06:13:02.285606] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000cf80 00:20:31.897 [2024-06-11 06:13:02.285615] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:31.897 [2024-06-11 06:13:02.285720] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:20:31.897 [2024-06-11 06:13:02.286043] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000cf80 00:20:31.897 [2024-06-11 06:13:02.286053] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000cf80 00:20:31.897 [2024-06-11 06:13:02.286185] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:31.897 pt3 00:20:31.897 06:13:02 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:20:31.897 06:13:02 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:31.897 06:13:02 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:31.897 06:13:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:31.897 06:13:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:31.897 06:13:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:31.897 06:13:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:31.897 06:13:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:31.897 06:13:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:31.897 06:13:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:31.897 06:13:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:31.897 06:13:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:31.897 06:13:02 
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.897 06:13:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.156 06:13:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:32.157 "name": "raid_bdev1", 00:20:32.157 "uuid": "faf43d97-d2c9-4933-a452-41c7f262d526", 00:20:32.157 "strip_size_kb": 0, 00:20:32.157 "state": "online", 00:20:32.157 "raid_level": "raid1", 00:20:32.157 "superblock": true, 00:20:32.157 "num_base_bdevs": 4, 00:20:32.157 "num_base_bdevs_discovered": 3, 00:20:32.157 "num_base_bdevs_operational": 3, 00:20:32.157 "base_bdevs_list": [ 00:20:32.157 { 00:20:32.157 "name": null, 00:20:32.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.157 "is_configured": false, 00:20:32.157 "data_offset": 2048, 00:20:32.157 "data_size": 63488 00:20:32.157 }, 00:20:32.157 { 00:20:32.157 "name": "pt2", 00:20:32.157 "uuid": "6c68a516-99e1-5017-b9fc-de86dbc3d5e0", 00:20:32.157 "is_configured": true, 00:20:32.157 "data_offset": 2048, 00:20:32.157 "data_size": 63488 00:20:32.157 }, 00:20:32.157 { 00:20:32.157 "name": "pt3", 00:20:32.157 "uuid": "c8faf11e-bfb8-5e59-bf70-550950669a9f", 00:20:32.157 "is_configured": true, 00:20:32.157 "data_offset": 2048, 00:20:32.157 "data_size": 63488 00:20:32.157 }, 00:20:32.157 { 00:20:32.157 "name": "pt4", 00:20:32.157 "uuid": "7edde881-e9fa-5c4c-bfb1-93cf90aca939", 00:20:32.157 "is_configured": true, 00:20:32.157 "data_offset": 2048, 00:20:32.157 "data_size": 63488 00:20:32.157 } 00:20:32.157 ] 00:20:32.157 }' 00:20:32.157 06:13:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:32.157 06:13:02 -- common/autotest_common.sh@10 -- # set +x 00:20:32.725 06:13:03 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:32.725 06:13:03 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:20:32.725 [2024-06-11 06:13:03.368979] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:32.984 06:13:03 -- bdev/bdev_raid.sh@506 -- # '[' faf43d97-d2c9-4933-a452-41c7f262d526 '!=' faf43d97-d2c9-4933-a452-41c7f262d526 ']' 00:20:32.984 06:13:03 -- bdev/bdev_raid.sh@511 -- # killprocess 122386 00:20:32.984 06:13:03 -- common/autotest_common.sh@926 -- # '[' -z 122386 ']' 00:20:32.984 06:13:03 -- common/autotest_common.sh@930 -- # kill -0 122386 00:20:32.984 06:13:03 -- common/autotest_common.sh@931 -- # uname 00:20:32.984 06:13:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:32.984 06:13:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122386 00:20:32.984 killing process with pid 122386 00:20:32.984 06:13:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:32.984 06:13:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:32.984 06:13:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122386' 00:20:32.984 06:13:03 -- common/autotest_common.sh@945 -- # kill 122386 00:20:32.984 [2024-06-11 06:13:03.412891] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:32.984 [2024-06-11 06:13:03.412978] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:32.984 [2024-06-11 06:13:03.413057] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:32.984 [2024-06-11 06:13:03.413070] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x61600000cf80 name raid_bdev1, state offline 00:20:32.984 06:13:03 -- common/autotest_common.sh@950 -- # wait 122386 00:20:33.243 [2024-06-11 06:13:03.814169] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:34.624 ************************************ 00:20:34.624 END TEST raid_superblock_test 00:20:34.624 ************************************ 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@513 -- # return 0 00:20:34.624 00:20:34.624 real 0m20.899s 00:20:34.624 user 0m36.676s 00:20:34.624 sys 0m3.436s 00:20:34.624 06:13:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:34.624 06:13:05 -- common/autotest_common.sh@10 -- # set +x 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:20:34.624 06:13:05 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:34.624 06:13:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:34.624 06:13:05 -- common/autotest_common.sh@10 -- # set +x 00:20:34.624 ************************************ 00:20:34.624 START TEST raid_rebuild_test 00:20:34.624 ************************************ 00:20:34.624 06:13:05 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false false 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@544 -- # raid_pid=123048 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@545 -- # waitforlisten 123048 /var/tmp/spdk-raid.sock 00:20:34.624 06:13:05 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:34.624 06:13:05 -- common/autotest_common.sh@819 -- # '[' -z 123048 ']' 00:20:34.624 06:13:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:34.624 06:13:05 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:20:34.624 06:13:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:34.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:34.624 06:13:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:34.624 06:13:05 -- common/autotest_common.sh@10 -- # set +x 00:20:34.884 [2024-06-11 06:13:05.325437] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:34.884 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:34.884 Zero copy mechanism will not be used. 00:20:34.884 [2024-06-11 06:13:05.325622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123048 ] 00:20:34.884 [2024-06-11 06:13:05.509564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.143 [2024-06-11 06:13:05.749430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.403 [2024-06-11 06:13:05.975699] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:35.662 06:13:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:35.662 06:13:06 -- common/autotest_common.sh@852 -- # return 0 00:20:35.662 06:13:06 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:35.662 06:13:06 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:35.662 06:13:06 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:35.922 BaseBdev1 00:20:35.922 06:13:06 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:35.922 06:13:06 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:35.922 06:13:06 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:36.181 BaseBdev2 00:20:36.181 06:13:06 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:36.441 spare_malloc 00:20:36.441 06:13:07 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:36.700 spare_delay 00:20:36.700 06:13:07 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:36.959 [2024-06-11 06:13:07.420025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:36.959 [2024-06-11 06:13:07.420290] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:36.959 [2024-06-11 06:13:07.420377] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:20:36.959 [2024-06-11 06:13:07.420502] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:36.959 [2024-06-11 06:13:07.423373] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:36.959 [2024-06-11 06:13:07.423565] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:36.959 spare 00:20:36.959 06:13:07 -- bdev/bdev_raid.sh@563 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:36.959 [2024-06-11 06:13:07.592130] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:36.959 [2024-06-11 06:13:07.594477] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:36.959 [2024-06-11 06:13:07.594728] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:20:36.959 [2024-06-11 06:13:07.594773] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:36.959 [2024-06-11 06:13:07.595028] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:20:36.959 [2024-06-11 06:13:07.595508] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:20:36.959 [2024-06-11 06:13:07.595628] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:20:36.959 [2024-06-11 06:13:07.595896] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.219 06:13:07 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:37.219 06:13:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:37.219 06:13:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:37.219 06:13:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:37.219 06:13:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:37.219 06:13:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:37.219 06:13:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:37.219 06:13:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:37.219 06:13:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:37.219 06:13:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:37.219 06:13:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.219 06:13:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.219 06:13:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:37.219 "name": "raid_bdev1", 00:20:37.219 "uuid": "f5568009-930a-4013-b40e-2b304ac66c93", 00:20:37.219 "strip_size_kb": 0, 00:20:37.219 "state": "online", 00:20:37.219 "raid_level": "raid1", 00:20:37.219 "superblock": false, 00:20:37.219 "num_base_bdevs": 2, 00:20:37.219 "num_base_bdevs_discovered": 2, 00:20:37.219 "num_base_bdevs_operational": 2, 00:20:37.219 "base_bdevs_list": [ 00:20:37.219 { 00:20:37.219 "name": "BaseBdev1", 00:20:37.219 "uuid": "f4745052-52d5-4c14-afee-ff35d03a28d5", 00:20:37.219 "is_configured": true, 00:20:37.219 "data_offset": 0, 00:20:37.219 "data_size": 65536 00:20:37.219 }, 00:20:37.219 { 00:20:37.219 "name": "BaseBdev2", 00:20:37.219 "uuid": "dabbf17f-1997-4b02-9314-a28ebe783928", 00:20:37.219 "is_configured": true, 00:20:37.219 "data_offset": 0, 00:20:37.219 "data_size": 65536 00:20:37.219 } 00:20:37.219 ] 00:20:37.219 }' 00:20:37.219 06:13:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:37.219 06:13:07 -- common/autotest_common.sh@10 -- # set +x 00:20:37.787 06:13:08 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:37.788 06:13:08 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:38.046 [2024-06-11 06:13:08.620512] bdev_raid.c: 
993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:38.046 06:13:08 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:20:38.046 06:13:08 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.046 06:13:08 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:38.305 06:13:08 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:20:38.305 06:13:08 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:20:38.305 06:13:08 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:20:38.305 06:13:08 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:38.305 06:13:08 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:38.305 06:13:08 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:38.305 06:13:08 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:38.305 06:13:08 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:38.305 06:13:08 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:38.305 06:13:08 -- bdev/nbd_common.sh@12 -- # local i 00:20:38.305 06:13:08 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:38.305 06:13:08 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:38.305 06:13:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:38.563 [2024-06-11 06:13:09.012444] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:20:38.564 /dev/nbd0 00:20:38.564 06:13:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:38.564 06:13:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:38.564 06:13:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:38.564 06:13:09 -- common/autotest_common.sh@857 -- # local i 00:20:38.564 06:13:09 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:38.564 06:13:09 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:38.564 06:13:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:38.564 06:13:09 -- common/autotest_common.sh@861 -- # break 00:20:38.564 06:13:09 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:38.564 06:13:09 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:38.564 06:13:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:38.564 1+0 records in 00:20:38.564 1+0 records out 00:20:38.564 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376874 s, 10.9 MB/s 00:20:38.564 06:13:09 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:38.564 06:13:09 -- common/autotest_common.sh@874 -- # size=4096 00:20:38.564 06:13:09 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:38.564 06:13:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:38.564 06:13:09 -- common/autotest_common.sh@877 -- # return 0 00:20:38.564 06:13:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:38.564 06:13:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:38.564 06:13:09 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:20:38.564 06:13:09 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:20:38.564 06:13:09 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:20:42.753 65536+0 records in 00:20:42.753 65536+0 records out 00:20:42.753 33554432 bytes (34 MB, 32 MiB) 
copied, 3.66389 s, 9.2 MB/s 00:20:42.753 06:13:12 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:42.753 06:13:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:42.753 06:13:12 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:42.753 06:13:12 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:42.753 06:13:12 -- bdev/nbd_common.sh@51 -- # local i 00:20:42.753 06:13:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:42.753 06:13:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:42.753 [2024-06-11 06:13:12.992400] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:42.753 06:13:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:42.753 06:13:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:42.753 06:13:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:42.753 06:13:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:42.753 06:13:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:42.753 06:13:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:42.753 06:13:13 -- bdev/nbd_common.sh@41 -- # break 00:20:42.753 06:13:13 -- bdev/nbd_common.sh@45 -- # return 0 00:20:42.753 06:13:13 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:42.753 [2024-06-11 06:13:13.180097] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:42.753 06:13:13 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:42.753 06:13:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:42.753 06:13:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:42.753 06:13:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:42.753 06:13:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:42.753 06:13:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:42.753 06:13:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:42.753 06:13:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:42.753 06:13:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:42.753 06:13:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:42.753 06:13:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.753 06:13:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.013 06:13:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:43.013 "name": "raid_bdev1", 00:20:43.013 "uuid": "f5568009-930a-4013-b40e-2b304ac66c93", 00:20:43.013 "strip_size_kb": 0, 00:20:43.013 "state": "online", 00:20:43.013 "raid_level": "raid1", 00:20:43.013 "superblock": false, 00:20:43.013 "num_base_bdevs": 2, 00:20:43.013 "num_base_bdevs_discovered": 1, 00:20:43.013 "num_base_bdevs_operational": 1, 00:20:43.013 "base_bdevs_list": [ 00:20:43.013 { 00:20:43.013 "name": null, 00:20:43.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.013 "is_configured": false, 00:20:43.013 "data_offset": 0, 00:20:43.013 "data_size": 65536 00:20:43.013 }, 00:20:43.013 { 00:20:43.013 "name": "BaseBdev2", 00:20:43.013 "uuid": "dabbf17f-1997-4b02-9314-a28ebe783928", 00:20:43.013 "is_configured": true, 00:20:43.013 "data_offset": 0, 00:20:43.013 "data_size": 65536 00:20:43.013 } 00:20:43.013 ] 00:20:43.013 }' 
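The rebuild test above assembles a two-way raid1 from BaseBdev1/BaseBdev2, exports it on /dev/nbd0, and fills all 65536 512-byte blocks (32 MiB) from /dev/urandom before detaching the NBD device and pulling BaseBdev1 out, leaving the array online but degraded with a single operational base bdev. A compact sketch of that populate-then-degrade sequence, restating only RPCs and sizes shown in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Export the array, fill it end to end, detach again.
    "$rpc" -s "$sock" nbd_start_disk raid_bdev1 /dev/nbd0
    dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct   # 32 MiB
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0

    # Degrade the array: raid1 stays online with one base bdev left.
    "$rpc" -s "$sock" bdev_raid_remove_base_bdev BaseBdev1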
00:20:43.013 06:13:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:43.013 06:13:13 -- common/autotest_common.sh@10 -- # set +x 00:20:43.581 06:13:13 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:43.581 [2024-06-11 06:13:14.192213] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:43.581 [2024-06-11 06:13:14.192264] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:43.581 [2024-06-11 06:13:14.208553] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09550 00:20:43.581 [2024-06-11 06:13:14.222965] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:43.581 06:13:14 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:44.959 06:13:15 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:44.959 06:13:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:44.959 06:13:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:44.959 06:13:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:44.959 06:13:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:44.959 06:13:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.959 06:13:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.959 06:13:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:44.959 "name": "raid_bdev1", 00:20:44.959 "uuid": "f5568009-930a-4013-b40e-2b304ac66c93", 00:20:44.959 "strip_size_kb": 0, 00:20:44.959 "state": "online", 00:20:44.959 "raid_level": "raid1", 00:20:44.959 "superblock": false, 00:20:44.959 "num_base_bdevs": 2, 00:20:44.959 "num_base_bdevs_discovered": 2, 00:20:44.959 "num_base_bdevs_operational": 2, 00:20:44.959 "process": { 00:20:44.959 "type": "rebuild", 00:20:44.959 "target": "spare", 00:20:44.959 "progress": { 00:20:44.959 "blocks": 24576, 00:20:44.959 "percent": 37 00:20:44.959 } 00:20:44.959 }, 00:20:44.959 "base_bdevs_list": [ 00:20:44.959 { 00:20:44.959 "name": "spare", 00:20:44.959 "uuid": "b8697915-2635-5bb5-bd10-bfede0a338a1", 00:20:44.959 "is_configured": true, 00:20:44.959 "data_offset": 0, 00:20:44.959 "data_size": 65536 00:20:44.959 }, 00:20:44.959 { 00:20:44.959 "name": "BaseBdev2", 00:20:44.959 "uuid": "dabbf17f-1997-4b02-9314-a28ebe783928", 00:20:44.959 "is_configured": true, 00:20:44.959 "data_offset": 0, 00:20:44.959 "data_size": 65536 00:20:44.959 } 00:20:44.959 ] 00:20:44.959 }' 00:20:44.959 06:13:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:44.959 06:13:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:44.959 06:13:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:44.959 06:13:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:44.959 06:13:15 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:45.218 [2024-06-11 06:13:15.772378] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:45.218 [2024-06-11 06:13:15.834343] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:45.218 [2024-06-11 06:13:15.834440] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:45.477 06:13:15 -- 
bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:45.477 06:13:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:45.477 06:13:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:45.477 06:13:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:45.477 06:13:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:45.477 06:13:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:45.477 06:13:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:45.477 06:13:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:45.477 06:13:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:45.477 06:13:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:45.477 06:13:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.477 06:13:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.477 06:13:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:45.477 "name": "raid_bdev1", 00:20:45.477 "uuid": "f5568009-930a-4013-b40e-2b304ac66c93", 00:20:45.477 "strip_size_kb": 0, 00:20:45.477 "state": "online", 00:20:45.477 "raid_level": "raid1", 00:20:45.477 "superblock": false, 00:20:45.477 "num_base_bdevs": 2, 00:20:45.477 "num_base_bdevs_discovered": 1, 00:20:45.477 "num_base_bdevs_operational": 1, 00:20:45.477 "base_bdevs_list": [ 00:20:45.477 { 00:20:45.477 "name": null, 00:20:45.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.477 "is_configured": false, 00:20:45.477 "data_offset": 0, 00:20:45.477 "data_size": 65536 00:20:45.477 }, 00:20:45.477 { 00:20:45.477 "name": "BaseBdev2", 00:20:45.477 "uuid": "dabbf17f-1997-4b02-9314-a28ebe783928", 00:20:45.477 "is_configured": true, 00:20:45.477 "data_offset": 0, 00:20:45.477 "data_size": 65536 00:20:45.477 } 00:20:45.477 ] 00:20:45.477 }' 00:20:45.477 06:13:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:45.477 06:13:16 -- common/autotest_common.sh@10 -- # set +x 00:20:46.045 06:13:16 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:46.045 06:13:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:46.045 06:13:16 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:46.045 06:13:16 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:46.045 06:13:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:46.045 06:13:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.045 06:13:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.303 06:13:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:46.303 "name": "raid_bdev1", 00:20:46.303 "uuid": "f5568009-930a-4013-b40e-2b304ac66c93", 00:20:46.303 "strip_size_kb": 0, 00:20:46.303 "state": "online", 00:20:46.303 "raid_level": "raid1", 00:20:46.303 "superblock": false, 00:20:46.303 "num_base_bdevs": 2, 00:20:46.303 "num_base_bdevs_discovered": 1, 00:20:46.303 "num_base_bdevs_operational": 1, 00:20:46.303 "base_bdevs_list": [ 00:20:46.303 { 00:20:46.303 "name": null, 00:20:46.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.303 "is_configured": false, 00:20:46.303 "data_offset": 0, 00:20:46.303 "data_size": 65536 00:20:46.303 }, 00:20:46.303 { 00:20:46.303 "name": "BaseBdev2", 00:20:46.303 "uuid": "dabbf17f-1997-4b02-9314-a28ebe783928", 00:20:46.303 "is_configured": true, 
00:20:46.303 "data_offset": 0, 00:20:46.303 "data_size": 65536 00:20:46.303 } 00:20:46.303 ] 00:20:46.303 }' 00:20:46.303 06:13:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:46.303 06:13:16 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:46.303 06:13:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:46.304 06:13:16 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:46.304 06:13:16 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:46.562 [2024-06-11 06:13:17.067687] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:46.562 [2024-06-11 06:13:17.067741] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:46.562 [2024-06-11 06:13:17.082061] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d096f0 00:20:46.562 [2024-06-11 06:13:17.084248] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:46.562 06:13:17 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:47.498 06:13:18 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:47.498 06:13:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:47.498 06:13:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:47.498 06:13:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:47.498 06:13:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:47.498 06:13:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.498 06:13:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.757 06:13:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:47.757 "name": "raid_bdev1", 00:20:47.757 "uuid": "f5568009-930a-4013-b40e-2b304ac66c93", 00:20:47.757 "strip_size_kb": 0, 00:20:47.757 "state": "online", 00:20:47.757 "raid_level": "raid1", 00:20:47.757 "superblock": false, 00:20:47.757 "num_base_bdevs": 2, 00:20:47.757 "num_base_bdevs_discovered": 2, 00:20:47.757 "num_base_bdevs_operational": 2, 00:20:47.757 "process": { 00:20:47.757 "type": "rebuild", 00:20:47.757 "target": "spare", 00:20:47.757 "progress": { 00:20:47.757 "blocks": 24576, 00:20:47.757 "percent": 37 00:20:47.757 } 00:20:47.757 }, 00:20:47.757 "base_bdevs_list": [ 00:20:47.757 { 00:20:47.757 "name": "spare", 00:20:47.757 "uuid": "b8697915-2635-5bb5-bd10-bfede0a338a1", 00:20:47.757 "is_configured": true, 00:20:47.757 "data_offset": 0, 00:20:47.757 "data_size": 65536 00:20:47.757 }, 00:20:47.757 { 00:20:47.757 "name": "BaseBdev2", 00:20:47.757 "uuid": "dabbf17f-1997-4b02-9314-a28ebe783928", 00:20:47.757 "is_configured": true, 00:20:47.757 "data_offset": 0, 00:20:47.757 "data_size": 65536 00:20:47.757 } 00:20:47.757 ] 00:20:47.757 }' 00:20:47.757 06:13:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:47.757 06:13:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:47.757 06:13:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:48.016 06:13:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:48.016 06:13:18 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:20:48.016 06:13:18 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:48.016 06:13:18 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:48.016 06:13:18 -- 
bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:48.016 06:13:18 -- bdev/bdev_raid.sh@657 -- # local timeout=391 00:20:48.016 06:13:18 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:48.016 06:13:18 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:48.016 06:13:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:48.016 06:13:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:48.016 06:13:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:48.016 06:13:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:48.016 06:13:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.016 06:13:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.275 06:13:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:48.275 "name": "raid_bdev1", 00:20:48.275 "uuid": "f5568009-930a-4013-b40e-2b304ac66c93", 00:20:48.275 "strip_size_kb": 0, 00:20:48.275 "state": "online", 00:20:48.275 "raid_level": "raid1", 00:20:48.275 "superblock": false, 00:20:48.275 "num_base_bdevs": 2, 00:20:48.275 "num_base_bdevs_discovered": 2, 00:20:48.275 "num_base_bdevs_operational": 2, 00:20:48.275 "process": { 00:20:48.275 "type": "rebuild", 00:20:48.275 "target": "spare", 00:20:48.275 "progress": { 00:20:48.275 "blocks": 30720, 00:20:48.275 "percent": 46 00:20:48.275 } 00:20:48.275 }, 00:20:48.275 "base_bdevs_list": [ 00:20:48.275 { 00:20:48.275 "name": "spare", 00:20:48.275 "uuid": "b8697915-2635-5bb5-bd10-bfede0a338a1", 00:20:48.275 "is_configured": true, 00:20:48.275 "data_offset": 0, 00:20:48.275 "data_size": 65536 00:20:48.275 }, 00:20:48.275 { 00:20:48.275 "name": "BaseBdev2", 00:20:48.275 "uuid": "dabbf17f-1997-4b02-9314-a28ebe783928", 00:20:48.275 "is_configured": true, 00:20:48.275 "data_offset": 0, 00:20:48.275 "data_size": 65536 00:20:48.275 } 00:20:48.275 ] 00:20:48.275 }' 00:20:48.275 06:13:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:48.275 06:13:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:48.275 06:13:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:48.275 06:13:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:48.275 06:13:18 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:49.215 06:13:19 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:49.215 06:13:19 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:49.215 06:13:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:49.215 06:13:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:49.215 06:13:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:49.215 06:13:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:49.215 06:13:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.215 06:13:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.474 06:13:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:49.474 "name": "raid_bdev1", 00:20:49.474 "uuid": "f5568009-930a-4013-b40e-2b304ac66c93", 00:20:49.474 "strip_size_kb": 0, 00:20:49.474 "state": "online", 00:20:49.474 "raid_level": "raid1", 00:20:49.474 "superblock": false, 00:20:49.474 "num_base_bdevs": 2, 00:20:49.474 "num_base_bdevs_discovered": 2, 00:20:49.474 "num_base_bdevs_operational": 2, 00:20:49.474 "process": { 
00:20:49.474 "type": "rebuild", 00:20:49.474 "target": "spare", 00:20:49.474 "progress": { 00:20:49.474 "blocks": 57344, 00:20:49.474 "percent": 87 00:20:49.474 } 00:20:49.474 }, 00:20:49.474 "base_bdevs_list": [ 00:20:49.474 { 00:20:49.474 "name": "spare", 00:20:49.474 "uuid": "b8697915-2635-5bb5-bd10-bfede0a338a1", 00:20:49.474 "is_configured": true, 00:20:49.474 "data_offset": 0, 00:20:49.474 "data_size": 65536 00:20:49.474 }, 00:20:49.474 { 00:20:49.474 "name": "BaseBdev2", 00:20:49.474 "uuid": "dabbf17f-1997-4b02-9314-a28ebe783928", 00:20:49.474 "is_configured": true, 00:20:49.474 "data_offset": 0, 00:20:49.474 "data_size": 65536 00:20:49.474 } 00:20:49.474 ] 00:20:49.474 }' 00:20:49.474 06:13:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:49.474 06:13:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:49.474 06:13:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:49.474 06:13:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:49.474 06:13:20 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:49.733 [2024-06-11 06:13:20.307500] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:49.733 [2024-06-11 06:13:20.307591] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:49.733 [2024-06-11 06:13:20.307665] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.671 06:13:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:50.671 06:13:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:50.671 06:13:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:50.671 06:13:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:50.671 06:13:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:50.671 06:13:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:50.671 06:13:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.671 06:13:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.930 06:13:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:50.930 "name": "raid_bdev1", 00:20:50.930 "uuid": "f5568009-930a-4013-b40e-2b304ac66c93", 00:20:50.930 "strip_size_kb": 0, 00:20:50.930 "state": "online", 00:20:50.930 "raid_level": "raid1", 00:20:50.930 "superblock": false, 00:20:50.930 "num_base_bdevs": 2, 00:20:50.930 "num_base_bdevs_discovered": 2, 00:20:50.930 "num_base_bdevs_operational": 2, 00:20:50.930 "base_bdevs_list": [ 00:20:50.930 { 00:20:50.930 "name": "spare", 00:20:50.930 "uuid": "b8697915-2635-5bb5-bd10-bfede0a338a1", 00:20:50.930 "is_configured": true, 00:20:50.930 "data_offset": 0, 00:20:50.930 "data_size": 65536 00:20:50.930 }, 00:20:50.930 { 00:20:50.930 "name": "BaseBdev2", 00:20:50.930 "uuid": "dabbf17f-1997-4b02-9314-a28ebe783928", 00:20:50.930 "is_configured": true, 00:20:50.931 "data_offset": 0, 00:20:50.931 "data_size": 65536 00:20:50.931 } 00:20:50.931 ] 00:20:50.931 }' 00:20:50.931 06:13:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:50.931 06:13:21 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:50.931 06:13:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:50.931 06:13:21 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:50.931 06:13:21 -- bdev/bdev_raid.sh@660 -- # break 00:20:50.931 06:13:21 -- 
bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:50.931 06:13:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:50.931 06:13:21 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:50.931 06:13:21 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:50.931 06:13:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:50.931 06:13:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.931 06:13:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.190 06:13:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:51.190 "name": "raid_bdev1", 00:20:51.190 "uuid": "f5568009-930a-4013-b40e-2b304ac66c93", 00:20:51.190 "strip_size_kb": 0, 00:20:51.190 "state": "online", 00:20:51.190 "raid_level": "raid1", 00:20:51.190 "superblock": false, 00:20:51.190 "num_base_bdevs": 2, 00:20:51.190 "num_base_bdevs_discovered": 2, 00:20:51.190 "num_base_bdevs_operational": 2, 00:20:51.190 "base_bdevs_list": [ 00:20:51.190 { 00:20:51.190 "name": "spare", 00:20:51.190 "uuid": "b8697915-2635-5bb5-bd10-bfede0a338a1", 00:20:51.190 "is_configured": true, 00:20:51.190 "data_offset": 0, 00:20:51.190 "data_size": 65536 00:20:51.190 }, 00:20:51.190 { 00:20:51.190 "name": "BaseBdev2", 00:20:51.190 "uuid": "dabbf17f-1997-4b02-9314-a28ebe783928", 00:20:51.190 "is_configured": true, 00:20:51.190 "data_offset": 0, 00:20:51.190 "data_size": 65536 00:20:51.190 } 00:20:51.190 ] 00:20:51.190 }' 00:20:51.190 06:13:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:51.190 06:13:21 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:51.190 06:13:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:51.190 06:13:21 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:51.190 06:13:21 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:51.190 06:13:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:51.190 06:13:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:51.190 06:13:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:51.190 06:13:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:51.190 06:13:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:51.190 06:13:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:51.190 06:13:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:51.190 06:13:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:51.190 06:13:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:51.190 06:13:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.190 06:13:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.449 06:13:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:51.449 "name": "raid_bdev1", 00:20:51.449 "uuid": "f5568009-930a-4013-b40e-2b304ac66c93", 00:20:51.449 "strip_size_kb": 0, 00:20:51.449 "state": "online", 00:20:51.449 "raid_level": "raid1", 00:20:51.449 "superblock": false, 00:20:51.449 "num_base_bdevs": 2, 00:20:51.449 "num_base_bdevs_discovered": 2, 00:20:51.449 "num_base_bdevs_operational": 2, 00:20:51.449 "base_bdevs_list": [ 00:20:51.449 { 00:20:51.449 "name": "spare", 00:20:51.449 "uuid": "b8697915-2635-5bb5-bd10-bfede0a338a1", 00:20:51.449 "is_configured": true, 00:20:51.449 "data_offset": 0, 
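Once the rebuild completes, the .process object disappears from the bdev_raid_get_bdevs output, which is why the same jq filters flip from "rebuild"/"spare" to "none"/"none" above: the '// "none"' alternative supplies the default for the missing field. One-line sketches for reading progress while a rebuild runs and for detecting completion, with field names taken verbatim from the JSON dumps in this log:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # percent complete while rebuilding, 0 once .process has vanished
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .process.progress.percent // 0'
    # completion test: "none" means no process is attached to the raid bdev
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"'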
00:20:51.449 "data_size": 65536 00:20:51.449 }, 00:20:51.449 { 00:20:51.449 "name": "BaseBdev2", 00:20:51.449 "uuid": "dabbf17f-1997-4b02-9314-a28ebe783928", 00:20:51.449 "is_configured": true, 00:20:51.449 "data_offset": 0, 00:20:51.449 "data_size": 65536 00:20:51.449 } 00:20:51.449 ] 00:20:51.449 }' 00:20:51.449 06:13:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:51.449 06:13:21 -- common/autotest_common.sh@10 -- # set +x 00:20:52.035 06:13:22 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:52.336 [2024-06-11 06:13:22.714005] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:52.336 [2024-06-11 06:13:22.714049] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:52.336 [2024-06-11 06:13:22.714154] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:52.336 [2024-06-11 06:13:22.714250] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:52.336 [2024-06-11 06:13:22.714260] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:20:52.336 06:13:22 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.336 06:13:22 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:52.336 06:13:22 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:52.336 06:13:22 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:20:52.336 06:13:22 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:52.336 06:13:22 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:52.336 06:13:22 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:52.336 06:13:22 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:52.336 06:13:22 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:52.336 06:13:22 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:52.336 06:13:22 -- bdev/nbd_common.sh@12 -- # local i 00:20:52.336 06:13:22 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:52.336 06:13:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:52.336 06:13:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:52.595 /dev/nbd0 00:20:52.595 06:13:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:52.595 06:13:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:52.595 06:13:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:52.595 06:13:23 -- common/autotest_common.sh@857 -- # local i 00:20:52.595 06:13:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:52.595 06:13:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:52.595 06:13:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:52.595 06:13:23 -- common/autotest_common.sh@861 -- # break 00:20:52.595 06:13:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:52.595 06:13:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:52.595 06:13:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:52.595 1+0 records in 00:20:52.595 1+0 records out 00:20:52.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571422 s, 7.2 MB/s 00:20:52.595 06:13:23 
-- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:52.595 06:13:23 -- common/autotest_common.sh@874 -- # size=4096 00:20:52.595 06:13:23 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:52.595 06:13:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:52.595 06:13:23 -- common/autotest_common.sh@877 -- # return 0 00:20:52.595 06:13:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:52.595 06:13:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:52.595 06:13:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:52.854 /dev/nbd1 00:20:53.113 06:13:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:53.113 06:13:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:53.113 06:13:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:53.113 06:13:23 -- common/autotest_common.sh@857 -- # local i 00:20:53.113 06:13:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:53.113 06:13:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:53.113 06:13:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:53.113 06:13:23 -- common/autotest_common.sh@861 -- # break 00:20:53.113 06:13:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:53.113 06:13:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:53.113 06:13:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:53.113 1+0 records in 00:20:53.113 1+0 records out 00:20:53.113 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480239 s, 8.5 MB/s 00:20:53.113 06:13:23 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:53.113 06:13:23 -- common/autotest_common.sh@874 -- # size=4096 00:20:53.113 06:13:23 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:53.113 06:13:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:53.113 06:13:23 -- common/autotest_common.sh@877 -- # return 0 00:20:53.113 06:13:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:53.113 06:13:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:53.113 06:13:23 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:53.113 06:13:23 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:53.113 06:13:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:53.113 06:13:23 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:53.113 06:13:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:53.113 06:13:23 -- bdev/nbd_common.sh@51 -- # local i 00:20:53.113 06:13:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:53.113 06:13:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:53.372 06:13:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:53.372 06:13:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:53.372 06:13:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:53.372 06:13:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:53.372 06:13:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:53.373 06:13:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:53.373 06:13:23 -- bdev/nbd_common.sh@41 -- # break 
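Before the byte-compare, each NBD export passes the waitfornbd gate traced above: the device name must show up in /proc/partitions, and a single 4 KiB O_DIRECT read must land a non-empty file, proving the kernel can actually service I/O on the export. A condensed sketch of that gate plus the verification compare (retry pacing and the scratch-file path are illustrative; cmp -i 0 skips nothing because this non-superblock run stores data at offset 0):

    waitfornbd_sketch() {
        local nbd=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd" /proc/partitions && break
            sleep 0.1
        done
        (( i <= 20 )) || return 1
        # direct I/O bypasses the page cache, so success means real device reads work
        dd "if=/dev/$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct &&
            [[ $(stat -c %s /tmp/nbdtest) -ne 0 ]]
    }
    waitfornbd_sketch nbd0 && waitfornbd_sketch nbd1
    cmp -i 0 /dev/nbd0 /dev/nbd1    # exit 0 iff the rebuilt spare matches BaseBdev1 byte-for-byte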
00:20:53.373 06:13:23 -- bdev/nbd_common.sh@45 -- # return 0 00:20:53.373 06:13:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:53.373 06:13:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:53.632 06:13:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:53.632 06:13:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:53.632 06:13:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:53.632 06:13:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:53.632 06:13:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:53.632 06:13:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:53.632 06:13:24 -- bdev/nbd_common.sh@41 -- # break 00:20:53.632 06:13:24 -- bdev/nbd_common.sh@45 -- # return 0 00:20:53.632 06:13:24 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:20:53.632 06:13:24 -- bdev/bdev_raid.sh@709 -- # killprocess 123048 00:20:53.632 06:13:24 -- common/autotest_common.sh@926 -- # '[' -z 123048 ']' 00:20:53.632 06:13:24 -- common/autotest_common.sh@930 -- # kill -0 123048 00:20:53.632 06:13:24 -- common/autotest_common.sh@931 -- # uname 00:20:53.632 06:13:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:53.632 06:13:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123048 00:20:53.632 06:13:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:53.632 06:13:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:53.632 06:13:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 123048' 00:20:53.632 killing process with pid 123048 00:20:53.632 06:13:24 -- common/autotest_common.sh@945 -- # kill 123048 00:20:53.632 Received shutdown signal, test time was about 60.000000 seconds 00:20:53.632 00:20:53.632 Latency(us) 00:20:53.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:53.632 =================================================================================================================== 00:20:53.632 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:53.632 [2024-06-11 06:13:24.231715] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:53.632 06:13:24 -- common/autotest_common.sh@950 -- # wait 123048 00:20:53.891 [2024-06-11 06:13:24.535657] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:55.271 06:13:25 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:55.530 00:20:55.530 real 0m20.683s 00:20:55.530 user 0m27.462s 00:20:55.530 sys 0m4.249s 00:20:55.530 06:13:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:55.530 06:13:25 -- common/autotest_common.sh@10 -- # set +x 00:20:55.530 ************************************ 00:20:55.530 END TEST raid_rebuild_test 00:20:55.530 ************************************ 00:20:55.530 06:13:25 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:20:55.530 06:13:25 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:55.530 06:13:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:55.530 06:13:25 -- common/autotest_common.sh@10 -- # set +x 00:20:55.530 ************************************ 00:20:55.530 START TEST raid_rebuild_test_sb 00:20:55.530 ************************************ 00:20:55.530 06:13:25 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true false 00:20:55.530 06:13:25 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:55.530 
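killprocess, traced just above for pid 123048, guards the kill with two liveness checks: kill -0 confirms the pid still exists, and ps -o comm= confirms it still names the expected process (reactor_0 here) rather than a reused pid or a sudo wrapper. A reduced sketch of that guard, assuming the target was started as a child of the current shell so wait can reap it:

    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" || return 1                       # process must still be alive
        local comm
        comm=$(ps --no-headers -o comm= "$pid")
        [[ $comm != sudo ]] || return 1                  # never signal a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                      # reap the child and collect its status
    }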
06:13:25 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:55.530 06:13:25 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:20:55.530 06:13:25 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:55.530 06:13:25 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:55.530 06:13:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:55.530 06:13:25 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:55.530 06:13:25 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:55.530 06:13:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:55.530 06:13:25 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:55.530 06:13:25 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:55.530 06:13:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:55.530 06:13:25 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:55.530 06:13:25 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:55.530 06:13:25 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:55.530 06:13:25 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:55.530 06:13:25 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:55.530 06:13:25 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:55.530 06:13:25 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:55.530 06:13:25 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:55.530 06:13:25 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:55.530 06:13:25 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:20:55.530 06:13:25 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:20:55.530 06:13:25 -- bdev/bdev_raid.sh@544 -- # raid_pid=123579 00:20:55.530 06:13:25 -- bdev/bdev_raid.sh@545 -- # waitforlisten 123579 /var/tmp/spdk-raid.sock 00:20:55.531 06:13:25 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:55.531 06:13:25 -- common/autotest_common.sh@819 -- # '[' -z 123579 ']' 00:20:55.531 06:13:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:55.531 06:13:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:55.531 06:13:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:55.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:55.531 06:13:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:55.531 06:13:25 -- common/autotest_common.sh@10 -- # set +x 00:20:55.531 [2024-06-11 06:13:26.075866] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:20:55.531 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:55.531 Zero copy mechanism will not be used. 
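The _sb variant drives the array with the bdevperf example app on its own RPC socket, launched above with -z so it starts idle and the raid can be assembled over RPC first: -T raid_bdev1 names the job target, -t 60 -w randrw -M 50 -o 3M -q 2 describes the eventual workload (60 s random read/write, 50% reads, 3 MiB I/Os at queue depth 2), and waitforlisten blocks until pid 123579 answers on the socket. A sketch of the launch-and-wait sequence with the flag set copied from the log; the readiness probe below is a deliberate simplification of the suite's waitforlisten helper:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    sock=/var/tmp/spdk-raid.sock
    $bdevperf -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # poll until the app services RPCs; spdk_get_version is a cheap query for this
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" spdk_get_version >/dev/null 2>&1; do
        kill -0 "$raid_pid" || exit 1    # give up if bdevperf died during startup
        sleep 0.1
    done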
00:20:55.531 [2024-06-11 06:13:26.076059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123579 ] 00:20:55.790 [2024-06-11 06:13:26.265219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.049 [2024-06-11 06:13:26.488465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.309 [2024-06-11 06:13:26.705551] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:56.309 06:13:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:56.309 06:13:26 -- common/autotest_common.sh@852 -- # return 0 00:20:56.309 06:13:26 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:56.309 06:13:26 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:56.309 06:13:26 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:56.568 BaseBdev1_malloc 00:20:56.568 06:13:27 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:56.828 [2024-06-11 06:13:27.348622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:56.828 [2024-06-11 06:13:27.348747] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:56.828 [2024-06-11 06:13:27.348786] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:20:56.828 [2024-06-11 06:13:27.348845] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:56.828 [2024-06-11 06:13:27.351615] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:56.828 [2024-06-11 06:13:27.351683] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:56.828 BaseBdev1 00:20:56.828 06:13:27 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:56.828 06:13:27 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:56.828 06:13:27 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:57.087 BaseBdev2_malloc 00:20:57.087 06:13:27 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:57.346 [2024-06-11 06:13:27.823156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:57.346 [2024-06-11 06:13:27.823264] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:57.346 [2024-06-11 06:13:27.823309] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:57.346 [2024-06-11 06:13:27.823368] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:57.346 [2024-06-11 06:13:27.825972] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:57.346 [2024-06-11 06:13:27.826020] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:57.346 BaseBdev2 00:20:57.346 06:13:27 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:57.605 spare_malloc 00:20:57.605 06:13:28 
-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:57.605 spare_delay 00:20:57.605 06:13:28 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:57.865 [2024-06-11 06:13:28.373599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:57.865 [2024-06-11 06:13:28.373690] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:57.865 [2024-06-11 06:13:28.373738] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:20:57.865 [2024-06-11 06:13:28.373783] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:57.865 [2024-06-11 06:13:28.376445] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:57.865 [2024-06-11 06:13:28.376503] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:57.865 spare 00:20:57.865 06:13:28 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:58.124 [2024-06-11 06:13:28.613747] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:58.124 [2024-06-11 06:13:28.616078] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:58.124 [2024-06-11 06:13:28.616298] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:20:58.124 [2024-06-11 06:13:28.616310] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:58.124 [2024-06-11 06:13:28.616496] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:20:58.124 [2024-06-11 06:13:28.616891] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:20:58.124 [2024-06-11 06:13:28.616911] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:20:58.124 [2024-06-11 06:13:28.617088] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:58.124 06:13:28 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:58.124 06:13:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:58.124 06:13:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:58.124 06:13:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:58.124 06:13:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:58.124 06:13:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:58.124 06:13:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:58.124 06:13:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:58.124 06:13:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:58.124 06:13:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:58.124 06:13:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.124 06:13:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.383 06:13:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:58.383 "name": "raid_bdev1", 00:20:58.383 "uuid": "5ede544c-ab9c-4d33-b708-d9ede4f6d173", 00:20:58.383 
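The spare built above is intentionally slow: its passthru sits on a delay bdev configured with zero read latency but 100000 us average and tail write latency, so the rebuild onto it runs long enough to be observed and interrupted. Creating the raid with -s then persists a superblock on every member, which is what later moves data_offset from 0 to 2048 blocks. A sketch of the spare and array construction, flags as in the log (the delay bdev's -r/-t are read and -w/-n write latencies, in microseconds):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $rpc bdev_passthru_create -b spare_delay -p spare
    # -s: write an on-disk superblock; raid1 mirrors, so no strip size is given
    $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1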
"strip_size_kb": 0, 00:20:58.383 "state": "online", 00:20:58.383 "raid_level": "raid1", 00:20:58.383 "superblock": true, 00:20:58.383 "num_base_bdevs": 2, 00:20:58.383 "num_base_bdevs_discovered": 2, 00:20:58.383 "num_base_bdevs_operational": 2, 00:20:58.383 "base_bdevs_list": [ 00:20:58.383 { 00:20:58.383 "name": "BaseBdev1", 00:20:58.383 "uuid": "57038825-bc39-56ad-b39c-64cd8c8ea521", 00:20:58.383 "is_configured": true, 00:20:58.383 "data_offset": 2048, 00:20:58.383 "data_size": 63488 00:20:58.383 }, 00:20:58.383 { 00:20:58.383 "name": "BaseBdev2", 00:20:58.383 "uuid": "07bfa7d4-54b9-568f-a570-8d5d58e88f3f", 00:20:58.383 "is_configured": true, 00:20:58.383 "data_offset": 2048, 00:20:58.383 "data_size": 63488 00:20:58.383 } 00:20:58.383 ] 00:20:58.383 }' 00:20:58.383 06:13:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:58.383 06:13:28 -- common/autotest_common.sh@10 -- # set +x 00:20:58.951 06:13:29 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:58.951 06:13:29 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:59.209 [2024-06-11 06:13:29.686082] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:59.209 06:13:29 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:20:59.209 06:13:29 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.209 06:13:29 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:59.468 06:13:29 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:20:59.468 06:13:29 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:20:59.468 06:13:29 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:20:59.468 06:13:29 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:59.468 06:13:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:59.468 06:13:29 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:59.468 06:13:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:59.468 06:13:29 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:59.468 06:13:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:59.468 06:13:29 -- bdev/nbd_common.sh@12 -- # local i 00:20:59.468 06:13:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:59.468 06:13:29 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:59.468 06:13:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:59.750 [2024-06-11 06:13:30.181963] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:20:59.750 /dev/nbd0 00:20:59.750 06:13:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:59.750 06:13:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:59.750 06:13:30 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:59.750 06:13:30 -- common/autotest_common.sh@857 -- # local i 00:20:59.750 06:13:30 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:59.750 06:13:30 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:59.750 06:13:30 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:59.750 06:13:30 -- common/autotest_common.sh@861 -- # break 00:20:59.750 06:13:30 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:59.750 06:13:30 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:59.750 06:13:30 -- 
common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:59.750 1+0 records in 00:20:59.750 1+0 records out 00:20:59.750 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299323 s, 13.7 MB/s 00:20:59.750 06:13:30 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:59.750 06:13:30 -- common/autotest_common.sh@874 -- # size=4096 00:20:59.750 06:13:30 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:59.750 06:13:30 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:59.750 06:13:30 -- common/autotest_common.sh@877 -- # return 0 00:20:59.750 06:13:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:59.750 06:13:30 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:59.750 06:13:30 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:20:59.750 06:13:30 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:20:59.750 06:13:30 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:21:05.027 63488+0 records in 00:21:05.027 63488+0 records out 00:21:05.027 32505856 bytes (33 MB, 31 MiB) copied, 4.52827 s, 7.2 MB/s 00:21:05.027 06:13:34 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:05.027 06:13:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:05.027 06:13:34 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:05.027 06:13:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:05.027 06:13:34 -- bdev/nbd_common.sh@51 -- # local i 00:21:05.027 06:13:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:05.027 06:13:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:05.027 06:13:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:05.027 [2024-06-11 06:13:35.036126] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:05.027 06:13:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:05.027 06:13:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:05.027 06:13:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:05.027 06:13:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:05.027 06:13:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:05.027 06:13:35 -- bdev/nbd_common.sh@41 -- # break 00:21:05.027 06:13:35 -- bdev/nbd_common.sh@45 -- # return 0 00:21:05.027 06:13:35 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:05.027 [2024-06-11 06:13:35.195784] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:05.027 06:13:35 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:05.027 06:13:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:05.027 06:13:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:05.027 06:13:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:05.027 06:13:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:05.027 06:13:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:05.027 06:13:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:05.027 06:13:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:05.027 06:13:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:05.027 06:13:35 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:21:05.027 06:13:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.027 06:13:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.027 06:13:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:05.027 "name": "raid_bdev1", 00:21:05.027 "uuid": "5ede544c-ab9c-4d33-b708-d9ede4f6d173", 00:21:05.027 "strip_size_kb": 0, 00:21:05.027 "state": "online", 00:21:05.027 "raid_level": "raid1", 00:21:05.027 "superblock": true, 00:21:05.027 "num_base_bdevs": 2, 00:21:05.027 "num_base_bdevs_discovered": 1, 00:21:05.027 "num_base_bdevs_operational": 1, 00:21:05.027 "base_bdevs_list": [ 00:21:05.027 { 00:21:05.027 "name": null, 00:21:05.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.027 "is_configured": false, 00:21:05.027 "data_offset": 2048, 00:21:05.027 "data_size": 63488 00:21:05.027 }, 00:21:05.027 { 00:21:05.027 "name": "BaseBdev2", 00:21:05.027 "uuid": "07bfa7d4-54b9-568f-a570-8d5d58e88f3f", 00:21:05.027 "is_configured": true, 00:21:05.027 "data_offset": 2048, 00:21:05.027 "data_size": 63488 00:21:05.027 } 00:21:05.027 ] 00:21:05.027 }' 00:21:05.027 06:13:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:05.027 06:13:35 -- common/autotest_common.sh@10 -- # set +x 00:21:05.595 06:13:35 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:05.595 [2024-06-11 06:13:36.148009] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:05.595 [2024-06-11 06:13:36.148062] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:05.595 [2024-06-11 06:13:36.163265] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca2e80 00:21:05.595 [2024-06-11 06:13:36.165576] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:05.595 06:13:36 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:06.546 06:13:37 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:06.546 06:13:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:06.546 06:13:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:06.546 06:13:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:06.546 06:13:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:06.546 06:13:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.546 06:13:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.805 06:13:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:06.805 "name": "raid_bdev1", 00:21:06.805 "uuid": "5ede544c-ab9c-4d33-b708-d9ede4f6d173", 00:21:06.805 "strip_size_kb": 0, 00:21:06.805 "state": "online", 00:21:06.805 "raid_level": "raid1", 00:21:06.805 "superblock": true, 00:21:06.805 "num_base_bdevs": 2, 00:21:06.805 "num_base_bdevs_discovered": 2, 00:21:06.805 "num_base_bdevs_operational": 2, 00:21:06.805 "process": { 00:21:06.805 "type": "rebuild", 00:21:06.805 "target": "spare", 00:21:06.805 "progress": { 00:21:06.805 "blocks": 24576, 00:21:06.805 "percent": 38 00:21:06.805 } 00:21:06.805 }, 00:21:06.805 "base_bdevs_list": [ 00:21:06.805 { 00:21:06.805 "name": "spare", 00:21:06.805 "uuid": "e28e66fe-0bb0-5d28-86b9-3e6ca366cc6f", 00:21:06.805 "is_configured": true, 00:21:06.805 
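The degraded dump above shows what hot removal looks like with a superblock in play: the vacated slot keeps its position in base_bdevs_list but reports a null name and an all-zero uuid, while the raid1 stays online with one of two members discovered. Attaching the spare then starts a rebuild automatically. A sketch asserting that degraded state before re-populating the slot, jq paths as in the log:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_remove_base_bdev BaseBdev1
    info=$($rpc bdev_raid_get_bdevs all | jq '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.state' <<< "$info") == online ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 1 ]]
    [[ $(jq -r '.base_bdevs_list[0].name' <<< "$info") == null ]]
    $rpc bdev_raid_add_base_bdev raid_bdev1 spare    # kicks off the rebuild onto the spare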
"data_offset": 2048, 00:21:06.805 "data_size": 63488 00:21:06.805 }, 00:21:06.805 { 00:21:06.805 "name": "BaseBdev2", 00:21:06.805 "uuid": "07bfa7d4-54b9-568f-a570-8d5d58e88f3f", 00:21:06.805 "is_configured": true, 00:21:06.805 "data_offset": 2048, 00:21:06.805 "data_size": 63488 00:21:06.805 } 00:21:06.805 ] 00:21:06.805 }' 00:21:06.805 06:13:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:07.064 06:13:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:07.064 06:13:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:07.064 06:13:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:07.064 06:13:37 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:07.064 [2024-06-11 06:13:37.659056] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:07.064 [2024-06-11 06:13:37.676374] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:07.064 [2024-06-11 06:13:37.676465] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:07.323 06:13:37 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:07.323 06:13:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:07.323 06:13:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:07.323 06:13:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:07.323 06:13:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:07.323 06:13:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:07.323 06:13:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:07.323 06:13:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:07.323 06:13:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:07.323 06:13:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:07.323 06:13:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.323 06:13:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.582 06:13:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:07.582 "name": "raid_bdev1", 00:21:07.582 "uuid": "5ede544c-ab9c-4d33-b708-d9ede4f6d173", 00:21:07.582 "strip_size_kb": 0, 00:21:07.582 "state": "online", 00:21:07.582 "raid_level": "raid1", 00:21:07.582 "superblock": true, 00:21:07.582 "num_base_bdevs": 2, 00:21:07.583 "num_base_bdevs_discovered": 1, 00:21:07.583 "num_base_bdevs_operational": 1, 00:21:07.583 "base_bdevs_list": [ 00:21:07.583 { 00:21:07.583 "name": null, 00:21:07.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.583 "is_configured": false, 00:21:07.583 "data_offset": 2048, 00:21:07.583 "data_size": 63488 00:21:07.583 }, 00:21:07.583 { 00:21:07.583 "name": "BaseBdev2", 00:21:07.583 "uuid": "07bfa7d4-54b9-568f-a570-8d5d58e88f3f", 00:21:07.583 "is_configured": true, 00:21:07.583 "data_offset": 2048, 00:21:07.583 "data_size": 63488 00:21:07.583 } 00:21:07.583 ] 00:21:07.583 }' 00:21:07.583 06:13:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:07.583 06:13:37 -- common/autotest_common.sh@10 -- # set +x 00:21:08.151 06:13:38 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:08.151 06:13:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:08.151 06:13:38 -- bdev/bdev_raid.sh@184 -- # local process_type=none 
00:21:08.151 06:13:38 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:08.151 06:13:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:08.151 06:13:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.151 06:13:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.410 06:13:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:08.410 "name": "raid_bdev1", 00:21:08.410 "uuid": "5ede544c-ab9c-4d33-b708-d9ede4f6d173", 00:21:08.410 "strip_size_kb": 0, 00:21:08.410 "state": "online", 00:21:08.410 "raid_level": "raid1", 00:21:08.410 "superblock": true, 00:21:08.410 "num_base_bdevs": 2, 00:21:08.410 "num_base_bdevs_discovered": 1, 00:21:08.410 "num_base_bdevs_operational": 1, 00:21:08.410 "base_bdevs_list": [ 00:21:08.410 { 00:21:08.410 "name": null, 00:21:08.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.410 "is_configured": false, 00:21:08.410 "data_offset": 2048, 00:21:08.410 "data_size": 63488 00:21:08.410 }, 00:21:08.410 { 00:21:08.410 "name": "BaseBdev2", 00:21:08.410 "uuid": "07bfa7d4-54b9-568f-a570-8d5d58e88f3f", 00:21:08.410 "is_configured": true, 00:21:08.410 "data_offset": 2048, 00:21:08.410 "data_size": 63488 00:21:08.410 } 00:21:08.410 ] 00:21:08.410 }' 00:21:08.410 06:13:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:08.410 06:13:38 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:08.410 06:13:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:08.410 06:13:38 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:08.410 06:13:38 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:08.669 [2024-06-11 06:13:39.131932] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:08.669 [2024-06-11 06:13:39.131984] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:08.669 [2024-06-11 06:13:39.147400] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3020 00:21:08.669 [2024-06-11 06:13:39.149669] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:08.669 06:13:39 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:09.605 06:13:40 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:09.605 06:13:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:09.605 06:13:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:09.605 06:13:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:09.605 06:13:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:09.605 06:13:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.605 06:13:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.866 06:13:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:09.866 "name": "raid_bdev1", 00:21:09.866 "uuid": "5ede544c-ab9c-4d33-b708-d9ede4f6d173", 00:21:09.866 "strip_size_kb": 0, 00:21:09.866 "state": "online", 00:21:09.866 "raid_level": "raid1", 00:21:09.866 "superblock": true, 00:21:09.866 "num_base_bdevs": 2, 00:21:09.866 "num_base_bdevs_discovered": 2, 00:21:09.866 "num_base_bdevs_operational": 2, 00:21:09.866 "process": { 00:21:09.866 "type": "rebuild", 00:21:09.866 "target": "spare", 
00:21:09.866 "progress": { 00:21:09.866 "blocks": 24576, 00:21:09.866 "percent": 38 00:21:09.866 } 00:21:09.866 }, 00:21:09.866 "base_bdevs_list": [ 00:21:09.866 { 00:21:09.866 "name": "spare", 00:21:09.866 "uuid": "e28e66fe-0bb0-5d28-86b9-3e6ca366cc6f", 00:21:09.866 "is_configured": true, 00:21:09.866 "data_offset": 2048, 00:21:09.866 "data_size": 63488 00:21:09.866 }, 00:21:09.866 { 00:21:09.866 "name": "BaseBdev2", 00:21:09.866 "uuid": "07bfa7d4-54b9-568f-a570-8d5d58e88f3f", 00:21:09.866 "is_configured": true, 00:21:09.866 "data_offset": 2048, 00:21:09.866 "data_size": 63488 00:21:09.866 } 00:21:09.866 ] 00:21:09.866 }' 00:21:09.866 06:13:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:09.866 06:13:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:09.866 06:13:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:09.866 06:13:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:09.866 06:13:40 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:21:09.866 06:13:40 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:21:09.866 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:21:09.866 06:13:40 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:21:09.866 06:13:40 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:09.866 06:13:40 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:21:09.866 06:13:40 -- bdev/bdev_raid.sh@657 -- # local timeout=413 00:21:09.866 06:13:40 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:09.866 06:13:40 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:09.866 06:13:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:09.866 06:13:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:09.866 06:13:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:09.866 06:13:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:09.866 06:13:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.866 06:13:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.125 06:13:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:10.125 "name": "raid_bdev1", 00:21:10.125 "uuid": "5ede544c-ab9c-4d33-b708-d9ede4f6d173", 00:21:10.125 "strip_size_kb": 0, 00:21:10.125 "state": "online", 00:21:10.125 "raid_level": "raid1", 00:21:10.125 "superblock": true, 00:21:10.125 "num_base_bdevs": 2, 00:21:10.125 "num_base_bdevs_discovered": 2, 00:21:10.125 "num_base_bdevs_operational": 2, 00:21:10.125 "process": { 00:21:10.125 "type": "rebuild", 00:21:10.125 "target": "spare", 00:21:10.125 "progress": { 00:21:10.125 "blocks": 30720, 00:21:10.125 "percent": 48 00:21:10.125 } 00:21:10.125 }, 00:21:10.125 "base_bdevs_list": [ 00:21:10.125 { 00:21:10.125 "name": "spare", 00:21:10.125 "uuid": "e28e66fe-0bb0-5d28-86b9-3e6ca366cc6f", 00:21:10.125 "is_configured": true, 00:21:10.125 "data_offset": 2048, 00:21:10.125 "data_size": 63488 00:21:10.125 }, 00:21:10.125 { 00:21:10.125 "name": "BaseBdev2", 00:21:10.125 "uuid": "07bfa7d4-54b9-568f-a570-8d5d58e88f3f", 00:21:10.125 "is_configured": true, 00:21:10.125 "data_offset": 2048, 00:21:10.125 "data_size": 63488 00:21:10.125 } 00:21:10.125 ] 00:21:10.125 }' 00:21:10.125 06:13:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:10.125 06:13:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:21:10.125 06:13:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:10.383 06:13:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:10.383 06:13:40 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:11.321 06:13:41 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:11.321 06:13:41 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:11.321 06:13:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:11.321 06:13:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:11.321 06:13:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:11.321 06:13:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:11.321 06:13:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.321 06:13:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.321 06:13:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:11.321 "name": "raid_bdev1", 00:21:11.321 "uuid": "5ede544c-ab9c-4d33-b708-d9ede4f6d173", 00:21:11.321 "strip_size_kb": 0, 00:21:11.321 "state": "online", 00:21:11.321 "raid_level": "raid1", 00:21:11.321 "superblock": true, 00:21:11.321 "num_base_bdevs": 2, 00:21:11.321 "num_base_bdevs_discovered": 2, 00:21:11.321 "num_base_bdevs_operational": 2, 00:21:11.321 "process": { 00:21:11.321 "type": "rebuild", 00:21:11.321 "target": "spare", 00:21:11.321 "progress": { 00:21:11.321 "blocks": 55296, 00:21:11.321 "percent": 87 00:21:11.321 } 00:21:11.321 }, 00:21:11.321 "base_bdevs_list": [ 00:21:11.321 { 00:21:11.321 "name": "spare", 00:21:11.321 "uuid": "e28e66fe-0bb0-5d28-86b9-3e6ca366cc6f", 00:21:11.321 "is_configured": true, 00:21:11.321 "data_offset": 2048, 00:21:11.321 "data_size": 63488 00:21:11.321 }, 00:21:11.321 { 00:21:11.321 "name": "BaseBdev2", 00:21:11.321 "uuid": "07bfa7d4-54b9-568f-a570-8d5d58e88f3f", 00:21:11.321 "is_configured": true, 00:21:11.321 "data_offset": 2048, 00:21:11.321 "data_size": 63488 00:21:11.321 } 00:21:11.321 ] 00:21:11.321 }' 00:21:11.321 06:13:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:11.579 06:13:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:11.579 06:13:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:11.579 06:13:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:11.579 06:13:42 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:11.838 [2024-06-11 06:13:42.272376] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:11.838 [2024-06-11 06:13:42.272605] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:11.838 [2024-06-11 06:13:42.272858] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:12.811 06:13:43 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:12.811 06:13:43 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:12.811 06:13:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:12.811 06:13:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:12.812 06:13:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:12.812 06:13:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:12.812 06:13:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.812 06:13:43 -- bdev/bdev_raid.sh@188 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.812 06:13:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:12.812 "name": "raid_bdev1", 00:21:12.812 "uuid": "5ede544c-ab9c-4d33-b708-d9ede4f6d173", 00:21:12.812 "strip_size_kb": 0, 00:21:12.812 "state": "online", 00:21:12.812 "raid_level": "raid1", 00:21:12.812 "superblock": true, 00:21:12.812 "num_base_bdevs": 2, 00:21:12.812 "num_base_bdevs_discovered": 2, 00:21:12.812 "num_base_bdevs_operational": 2, 00:21:12.812 "base_bdevs_list": [ 00:21:12.812 { 00:21:12.812 "name": "spare", 00:21:12.812 "uuid": "e28e66fe-0bb0-5d28-86b9-3e6ca366cc6f", 00:21:12.812 "is_configured": true, 00:21:12.812 "data_offset": 2048, 00:21:12.812 "data_size": 63488 00:21:12.812 }, 00:21:12.812 { 00:21:12.812 "name": "BaseBdev2", 00:21:12.812 "uuid": "07bfa7d4-54b9-568f-a570-8d5d58e88f3f", 00:21:12.812 "is_configured": true, 00:21:12.812 "data_offset": 2048, 00:21:12.812 "data_size": 63488 00:21:12.812 } 00:21:12.812 ] 00:21:12.812 }' 00:21:12.812 06:13:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:12.812 06:13:43 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:12.812 06:13:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:12.812 06:13:43 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:12.812 06:13:43 -- bdev/bdev_raid.sh@660 -- # break 00:21:12.812 06:13:43 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:12.812 06:13:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:12.812 06:13:43 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:12.812 06:13:43 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:12.812 06:13:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:12.812 06:13:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.812 06:13:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.071 06:13:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:13.071 "name": "raid_bdev1", 00:21:13.071 "uuid": "5ede544c-ab9c-4d33-b708-d9ede4f6d173", 00:21:13.071 "strip_size_kb": 0, 00:21:13.071 "state": "online", 00:21:13.071 "raid_level": "raid1", 00:21:13.071 "superblock": true, 00:21:13.071 "num_base_bdevs": 2, 00:21:13.071 "num_base_bdevs_discovered": 2, 00:21:13.071 "num_base_bdevs_operational": 2, 00:21:13.071 "base_bdevs_list": [ 00:21:13.071 { 00:21:13.071 "name": "spare", 00:21:13.071 "uuid": "e28e66fe-0bb0-5d28-86b9-3e6ca366cc6f", 00:21:13.071 "is_configured": true, 00:21:13.071 "data_offset": 2048, 00:21:13.071 "data_size": 63488 00:21:13.071 }, 00:21:13.071 { 00:21:13.071 "name": "BaseBdev2", 00:21:13.071 "uuid": "07bfa7d4-54b9-568f-a570-8d5d58e88f3f", 00:21:13.071 "is_configured": true, 00:21:13.071 "data_offset": 2048, 00:21:13.071 "data_size": 63488 00:21:13.071 } 00:21:13.071 ] 00:21:13.071 }' 00:21:13.071 06:13:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:13.071 06:13:43 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:13.071 06:13:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:13.329 06:13:43 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:13.329 06:13:43 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:13.329 06:13:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:13.329 06:13:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 
00:21:13.329 06:13:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:13.329 06:13:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:13.329 06:13:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:13.329 06:13:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:13.329 06:13:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:13.329 06:13:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:13.329 06:13:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:13.329 06:13:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.329 06:13:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.588 06:13:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:13.588 "name": "raid_bdev1", 00:21:13.588 "uuid": "5ede544c-ab9c-4d33-b708-d9ede4f6d173", 00:21:13.588 "strip_size_kb": 0, 00:21:13.588 "state": "online", 00:21:13.588 "raid_level": "raid1", 00:21:13.588 "superblock": true, 00:21:13.588 "num_base_bdevs": 2, 00:21:13.588 "num_base_bdevs_discovered": 2, 00:21:13.588 "num_base_bdevs_operational": 2, 00:21:13.588 "base_bdevs_list": [ 00:21:13.588 { 00:21:13.588 "name": "spare", 00:21:13.588 "uuid": "e28e66fe-0bb0-5d28-86b9-3e6ca366cc6f", 00:21:13.588 "is_configured": true, 00:21:13.588 "data_offset": 2048, 00:21:13.588 "data_size": 63488 00:21:13.588 }, 00:21:13.588 { 00:21:13.589 "name": "BaseBdev2", 00:21:13.589 "uuid": "07bfa7d4-54b9-568f-a570-8d5d58e88f3f", 00:21:13.589 "is_configured": true, 00:21:13.589 "data_offset": 2048, 00:21:13.589 "data_size": 63488 00:21:13.589 } 00:21:13.589 ] 00:21:13.589 }' 00:21:13.589 06:13:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:13.589 06:13:43 -- common/autotest_common.sh@10 -- # set +x 00:21:14.157 06:13:44 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:14.416 [2024-06-11 06:13:44.875250] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:14.416 [2024-06-11 06:13:44.875462] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:14.416 [2024-06-11 06:13:44.875729] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:14.416 [2024-06-11 06:13:44.875904] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:14.416 [2024-06-11 06:13:44.876005] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:21:14.416 06:13:44 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.416 06:13:44 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:14.675 06:13:45 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:14.675 06:13:45 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:21:14.675 06:13:45 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:14.675 06:13:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:14.675 06:13:45 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:14.675 06:13:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:14.675 06:13:45 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:14.675 06:13:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 
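Teardown above follows the same shape as the non-superblock run: bdev_raid_delete deconfigures the array (state online -> offline in the debug trace), and the follow-up listing must come back empty before the data-integrity check over NBD begins. A sketch of that teardown assertion:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_delete raid_bdev1
    # jq length of an empty JSON array is 0, i.e. no raid bdevs remain registered
    [[ $($rpc bdev_raid_get_bdevs all | jq length) == 0 ]]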
00:21:14.675 06:13:45 -- bdev/nbd_common.sh@12 -- # local i 00:21:14.675 06:13:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:14.675 06:13:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:14.675 06:13:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:14.934 /dev/nbd0 00:21:14.934 06:13:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:14.934 06:13:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:14.934 06:13:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:14.934 06:13:45 -- common/autotest_common.sh@857 -- # local i 00:21:14.934 06:13:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:14.934 06:13:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:14.934 06:13:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:14.934 06:13:45 -- common/autotest_common.sh@861 -- # break 00:21:14.934 06:13:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:14.934 06:13:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:14.934 06:13:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:14.934 1+0 records in 00:21:14.934 1+0 records out 00:21:14.934 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464664 s, 8.8 MB/s 00:21:14.934 06:13:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:14.934 06:13:45 -- common/autotest_common.sh@874 -- # size=4096 00:21:14.934 06:13:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:14.934 06:13:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:14.934 06:13:45 -- common/autotest_common.sh@877 -- # return 0 00:21:14.934 06:13:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:14.934 06:13:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:14.934 06:13:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:15.194 /dev/nbd1 00:21:15.194 06:13:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:15.194 06:13:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:15.194 06:13:45 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:15.194 06:13:45 -- common/autotest_common.sh@857 -- # local i 00:21:15.194 06:13:45 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:15.194 06:13:45 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:15.194 06:13:45 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:15.194 06:13:45 -- common/autotest_common.sh@861 -- # break 00:21:15.194 06:13:45 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:15.194 06:13:45 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:15.194 06:13:45 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:15.194 1+0 records in 00:21:15.194 1+0 records out 00:21:15.194 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026363 s, 15.5 MB/s 00:21:15.194 06:13:45 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:15.194 06:13:45 -- common/autotest_common.sh@874 -- # size=4096 00:21:15.194 06:13:45 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:15.194 06:13:45 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 
00:21:15.194 06:13:45 -- common/autotest_common.sh@877 -- # return 0 00:21:15.194 06:13:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:15.194 06:13:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:15.194 06:13:45 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:15.453 06:13:45 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:15.453 06:13:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:15.453 06:13:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:15.453 06:13:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:15.453 06:13:45 -- bdev/nbd_common.sh@51 -- # local i 00:21:15.453 06:13:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:15.453 06:13:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:15.712 06:13:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:15.712 06:13:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:15.712 06:13:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:15.712 06:13:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:15.712 06:13:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:15.712 06:13:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:15.712 06:13:46 -- bdev/nbd_common.sh@41 -- # break 00:21:15.712 06:13:46 -- bdev/nbd_common.sh@45 -- # return 0 00:21:15.712 06:13:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:15.712 06:13:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:15.971 06:13:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:15.971 06:13:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:15.971 06:13:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:15.971 06:13:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:15.972 06:13:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:15.972 06:13:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:15.972 06:13:46 -- bdev/nbd_common.sh@41 -- # break 00:21:15.972 06:13:46 -- bdev/nbd_common.sh@45 -- # return 0 00:21:15.972 06:13:46 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:15.972 06:13:46 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:15.972 06:13:46 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:15.972 06:13:46 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:16.231 06:13:46 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:16.231 [2024-06-11 06:13:46.846459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:16.231 [2024-06-11 06:13:46.846570] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.231 [2024-06-11 06:13:46.846621] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:21:16.231 [2024-06-11 06:13:46.846651] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.231 [2024-06-11 06:13:46.849352] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.231 [2024-06-11 06:13:46.849441] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
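Annotation: two details worth noting above. First, cmp -i 1048576 compares the two exported devices only past the first 1 MiB — that is data_offset 2048 blocks times the 512-byte blocklen reported later, i.e. the region holding the raid superblock is skipped. Second, since this is the superblock variant of the test, each base bdev's passthru wrapper is deleted and recreated so the raid module re-examines and re-claims the on-disk superblock. The recreate step as a sketch, with names taken from this test:

    # Sketch: recreate each base bdev's passthru wrapper so the raid
    # module re-reads the on-disk superblock.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for bdev in BaseBdev1 BaseBdev2; do
        $rpc bdev_passthru_delete "$bdev"
        $rpc bdev_passthru_create -b "${bdev}_malloc" -p "$bdev"
    done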
00:21:16.231 [2024-06-11 06:13:46.849576] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:16.231 [2024-06-11 06:13:46.849635] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:16.231 BaseBdev1 00:21:16.231 06:13:46 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:16.231 06:13:46 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:21:16.231 06:13:46 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:21:16.490 06:13:47 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:16.749 [2024-06-11 06:13:47.270574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:16.749 [2024-06-11 06:13:47.270679] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.749 [2024-06-11 06:13:47.270737] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:21:16.749 [2024-06-11 06:13:47.270767] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.749 [2024-06-11 06:13:47.271259] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:16.749 [2024-06-11 06:13:47.271322] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:16.749 [2024-06-11 06:13:47.271449] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:21:16.749 [2024-06-11 06:13:47.271462] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:21:16.749 [2024-06-11 06:13:47.271470] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:16.749 [2024-06-11 06:13:47.271496] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state configuring 00:21:16.749 [2024-06-11 06:13:47.271582] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:16.749 BaseBdev2 00:21:16.749 06:13:47 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:17.009 06:13:47 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:17.009 [2024-06-11 06:13:47.618678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:17.009 [2024-06-11 06:13:47.618771] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:17.009 [2024-06-11 06:13:47.618819] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:17.009 [2024-06-11 06:13:47.618843] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:17.009 [2024-06-11 06:13:47.619403] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:17.009 [2024-06-11 06:13:47.619456] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:17.009 [2024-06-11 06:13:47.619585] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:17.009 [2024-06-11 06:13:47.619623] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:21:17.009 spare 00:21:17.009 06:13:47 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:17.009 06:13:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:17.009 06:13:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:17.009 06:13:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:17.009 06:13:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:17.009 06:13:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:17.009 06:13:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:17.009 06:13:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:17.009 06:13:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:17.009 06:13:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:17.009 06:13:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.009 06:13:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.271 [2024-06-11 06:13:47.719736] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:21:17.271 [2024-06-11 06:13:47.719762] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:17.271 [2024-06-11 06:13:47.719921] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:21:17.271 [2024-06-11 06:13:47.720356] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:21:17.271 [2024-06-11 06:13:47.720376] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:21:17.271 [2024-06-11 06:13:47.720539] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:17.271 06:13:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:17.271 "name": "raid_bdev1", 00:21:17.271 "uuid": "5ede544c-ab9c-4d33-b708-d9ede4f6d173", 00:21:17.271 "strip_size_kb": 0, 00:21:17.271 "state": "online", 00:21:17.271 "raid_level": "raid1", 00:21:17.271 "superblock": true, 00:21:17.271 "num_base_bdevs": 2, 00:21:17.271 "num_base_bdevs_discovered": 2, 00:21:17.271 "num_base_bdevs_operational": 2, 00:21:17.271 "base_bdevs_list": [ 00:21:17.271 { 00:21:17.271 "name": "spare", 00:21:17.271 "uuid": "e28e66fe-0bb0-5d28-86b9-3e6ca366cc6f", 00:21:17.271 "is_configured": true, 00:21:17.271 "data_offset": 2048, 00:21:17.271 "data_size": 63488 00:21:17.271 }, 00:21:17.271 { 00:21:17.271 "name": "BaseBdev2", 00:21:17.271 "uuid": "07bfa7d4-54b9-568f-a570-8d5d58e88f3f", 00:21:17.271 "is_configured": true, 00:21:17.271 "data_offset": 2048, 00:21:17.271 "data_size": 63488 00:21:17.271 } 00:21:17.271 ] 00:21:17.271 }' 00:21:17.271 06:13:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:17.271 06:13:47 -- common/autotest_common.sh@10 -- # set +x 00:21:17.839 06:13:48 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:17.839 06:13:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:17.839 06:13:48 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:17.839 06:13:48 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:17.839 06:13:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:17.839 06:13:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.840 06:13:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:21:18.099 06:13:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:18.099 "name": "raid_bdev1", 00:21:18.099 "uuid": "5ede544c-ab9c-4d33-b708-d9ede4f6d173", 00:21:18.099 "strip_size_kb": 0, 00:21:18.099 "state": "online", 00:21:18.099 "raid_level": "raid1", 00:21:18.099 "superblock": true, 00:21:18.099 "num_base_bdevs": 2, 00:21:18.099 "num_base_bdevs_discovered": 2, 00:21:18.099 "num_base_bdevs_operational": 2, 00:21:18.099 "base_bdevs_list": [ 00:21:18.099 { 00:21:18.099 "name": "spare", 00:21:18.099 "uuid": "e28e66fe-0bb0-5d28-86b9-3e6ca366cc6f", 00:21:18.099 "is_configured": true, 00:21:18.099 "data_offset": 2048, 00:21:18.099 "data_size": 63488 00:21:18.099 }, 00:21:18.099 { 00:21:18.099 "name": "BaseBdev2", 00:21:18.099 "uuid": "07bfa7d4-54b9-568f-a570-8d5d58e88f3f", 00:21:18.099 "is_configured": true, 00:21:18.099 "data_offset": 2048, 00:21:18.099 "data_size": 63488 00:21:18.099 } 00:21:18.099 ] 00:21:18.099 }' 00:21:18.099 06:13:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:18.099 06:13:48 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:18.099 06:13:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:18.099 06:13:48 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:18.099 06:13:48 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.099 06:13:48 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:18.358 06:13:48 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:18.358 06:13:48 -- bdev/bdev_raid.sh@709 -- # killprocess 123579 00:21:18.358 06:13:48 -- common/autotest_common.sh@926 -- # '[' -z 123579 ']' 00:21:18.358 06:13:48 -- common/autotest_common.sh@930 -- # kill -0 123579 00:21:18.358 06:13:48 -- common/autotest_common.sh@931 -- # uname 00:21:18.358 06:13:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:18.358 06:13:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123579 00:21:18.617 killing process with pid 123579 00:21:18.617 Received shutdown signal, test time was about 60.000000 seconds 00:21:18.617 00:21:18.617 Latency(us) 00:21:18.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.617 =================================================================================================================== 00:21:18.617 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:18.617 06:13:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:18.617 06:13:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:18.617 06:13:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 123579' 00:21:18.617 06:13:49 -- common/autotest_common.sh@945 -- # kill 123579 00:21:18.617 [2024-06-11 06:13:49.011518] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:18.617 06:13:49 -- common/autotest_common.sh@950 -- # wait 123579 00:21:18.617 [2024-06-11 06:13:49.011622] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:18.617 [2024-06-11 06:13:49.011699] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:18.617 [2024-06-11 06:13:49.011707] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:21:18.876 [2024-06-11 06:13:49.314989] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:20.256 
************************************ 00:21:20.256 END TEST raid_rebuild_test_sb 00:21:20.256 ************************************ 00:21:20.256 06:13:50 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:20.256 00:21:20.256 real 0m24.718s 00:21:20.256 user 0m34.141s 00:21:20.256 sys 0m5.294s 00:21:20.256 06:13:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:20.256 06:13:50 -- common/autotest_common.sh@10 -- # set +x 00:21:20.256 06:13:50 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:21:20.256 06:13:50 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:20.256 06:13:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:20.256 06:13:50 -- common/autotest_common.sh@10 -- # set +x 00:21:20.256 ************************************ 00:21:20.256 START TEST raid_rebuild_test_io 00:21:20.256 ************************************ 00:21:20.256 06:13:50 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false true 00:21:20.256 06:13:50 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:20.256 06:13:50 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:21:20.256 06:13:50 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:21:20.256 06:13:50 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:21:20.256 06:13:50 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:20.256 06:13:50 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:20.256 06:13:50 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:20.256 06:13:50 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:20.256 06:13:50 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:20.256 06:13:50 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:20.256 06:13:50 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:20.256 06:13:50 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:20.256 06:13:50 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:20.256 06:13:50 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:20.256 06:13:50 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:20.256 06:13:50 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:20.256 06:13:50 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:20.256 06:13:50 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:20.256 06:13:50 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:20.256 06:13:50 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:20.256 06:13:50 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:20.256 06:13:50 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:21:20.256 06:13:50 -- bdev/bdev_raid.sh@544 -- # raid_pid=124194 00:21:20.256 06:13:50 -- bdev/bdev_raid.sh@545 -- # waitforlisten 124194 /var/tmp/spdk-raid.sock 00:21:20.256 06:13:50 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:20.256 06:13:50 -- common/autotest_common.sh@819 -- # '[' -z 124194 ']' 00:21:20.256 06:13:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:20.256 06:13:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:20.256 06:13:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:20.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
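Annotation: raid_rebuild_test_io drives the same rebuild flow while bdevperf generates background I/O against raid_bdev1. In the invocation above, -z makes bdevperf start idle on the RPC socket until a perform_tests RPC arrives, and -w randrw -M 50 -o 3M -q 2 -t 60 describes the workload: 50/50 random read/write mix, 3 MiB I/Os, queue depth 2, for 60 seconds (hence the zero-copy-threshold notice that follows). A sketch of the launch-and-trigger pattern — the -U flag from the actual invocation is omitted here:

    # Sketch: start bdevperf idle (-z), then trigger the measured run over RPC.
    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/spdk-raid.sock
    "$spdk/build/examples/bdevperf" -r "$sock" -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 3M -q 2 -z -L bdev_raid &
    raid_pid=$!
    # ... wait for the socket, create base bdevs and raid_bdev1, then:
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests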
00:21:20.256 06:13:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:20.256 06:13:50 -- common/autotest_common.sh@10 -- # set +x 00:21:20.256 [2024-06-11 06:13:50.888840] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:20.256 [2024-06-11 06:13:50.889120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124194 ] 00:21:20.256 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:20.256 Zero copy mechanism will not be used. 00:21:20.515 [2024-06-11 06:13:51.087667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.775 [2024-06-11 06:13:51.324066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.034 [2024-06-11 06:13:51.544853] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:21.293 06:13:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:21.293 06:13:51 -- common/autotest_common.sh@852 -- # return 0 00:21:21.293 06:13:51 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:21.293 06:13:51 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:21.293 06:13:51 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:21.552 BaseBdev1 00:21:21.552 06:13:52 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:21.552 06:13:52 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:21.552 06:13:52 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:21.811 BaseBdev2 00:21:21.811 06:13:52 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:22.069 spare_malloc 00:21:22.069 06:13:52 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:22.328 spare_delay 00:21:22.328 06:13:52 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:22.588 [2024-06-11 06:13:53.015189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:22.588 [2024-06-11 06:13:53.015303] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:22.588 [2024-06-11 06:13:53.015346] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:21:22.588 [2024-06-11 06:13:53.015396] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:22.588 [2024-06-11 06:13:53.018116] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:22.588 [2024-06-11 06:13:53.018166] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:22.588 spare 00:21:22.588 06:13:53 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:21:22.846 [2024-06-11 06:13:53.247534] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:22.846 [2024-06-11 06:13:53.249795] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:22.846 [2024-06-11 06:13:53.249908] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:21:22.847 [2024-06-11 06:13:53.249918] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:22.847 [2024-06-11 06:13:53.250078] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:21:22.847 [2024-06-11 06:13:53.250473] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:21:22.847 [2024-06-11 06:13:53.250491] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:21:22.847 [2024-06-11 06:13:53.250682] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:22.847 06:13:53 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:22.847 06:13:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:22.847 06:13:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:22.847 06:13:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:22.847 06:13:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:22.847 06:13:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:22.847 06:13:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:22.847 06:13:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:22.847 06:13:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:22.847 06:13:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:22.847 06:13:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.847 06:13:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.106 06:13:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:23.106 "name": "raid_bdev1", 00:21:23.106 "uuid": "5d1b72dc-6f40-49ca-bfff-854367b26777", 00:21:23.106 "strip_size_kb": 0, 00:21:23.106 "state": "online", 00:21:23.106 "raid_level": "raid1", 00:21:23.106 "superblock": false, 00:21:23.106 "num_base_bdevs": 2, 00:21:23.106 "num_base_bdevs_discovered": 2, 00:21:23.106 "num_base_bdevs_operational": 2, 00:21:23.106 "base_bdevs_list": [ 00:21:23.106 { 00:21:23.106 "name": "BaseBdev1", 00:21:23.106 "uuid": "d42d920d-7e91-49ca-8692-4445bb5cfe9d", 00:21:23.106 "is_configured": true, 00:21:23.106 "data_offset": 0, 00:21:23.106 "data_size": 65536 00:21:23.106 }, 00:21:23.106 { 00:21:23.106 "name": "BaseBdev2", 00:21:23.106 "uuid": "ed85eae7-7df9-4104-8ec6-e77f866038b2", 00:21:23.106 "is_configured": true, 00:21:23.106 "data_offset": 0, 00:21:23.106 "data_size": 65536 00:21:23.106 } 00:21:23.106 ] 00:21:23.106 }' 00:21:23.106 06:13:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:23.106 06:13:53 -- common/autotest_common.sh@10 -- # set +x 00:21:23.675 06:13:54 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:23.675 06:13:54 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:23.934 [2024-06-11 06:13:54.356074] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:23.934 06:13:54 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:21:23.934 06:13:54 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.934 
06:13:54 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:24.193 06:13:54 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:21:24.193 06:13:54 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:21:24.193 06:13:54 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:24.193 06:13:54 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:24.193 [2024-06-11 06:13:54.717207] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:21:24.193 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:24.193 Zero copy mechanism will not be used. 00:21:24.193 Running I/O for 60 seconds... 00:21:24.193 [2024-06-11 06:13:54.826712] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:24.193 [2024-06-11 06:13:54.832540] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005860 00:21:24.452 06:13:54 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:24.452 06:13:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:24.452 06:13:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:24.452 06:13:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:24.452 06:13:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:24.452 06:13:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:24.452 06:13:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:24.452 06:13:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:24.452 06:13:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:24.452 06:13:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:24.452 06:13:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.452 06:13:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.712 06:13:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:24.712 "name": "raid_bdev1", 00:21:24.712 "uuid": "5d1b72dc-6f40-49ca-bfff-854367b26777", 00:21:24.712 "strip_size_kb": 0, 00:21:24.712 "state": "online", 00:21:24.712 "raid_level": "raid1", 00:21:24.712 "superblock": false, 00:21:24.712 "num_base_bdevs": 2, 00:21:24.712 "num_base_bdevs_discovered": 1, 00:21:24.712 "num_base_bdevs_operational": 1, 00:21:24.712 "base_bdevs_list": [ 00:21:24.712 { 00:21:24.712 "name": null, 00:21:24.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.712 "is_configured": false, 00:21:24.712 "data_offset": 0, 00:21:24.712 "data_size": 65536 00:21:24.712 }, 00:21:24.712 { 00:21:24.712 "name": "BaseBdev2", 00:21:24.712 "uuid": "ed85eae7-7df9-4104-8ec6-e77f866038b2", 00:21:24.712 "is_configured": true, 00:21:24.712 "data_offset": 0, 00:21:24.712 "data_size": 65536 00:21:24.712 } 00:21:24.712 ] 00:21:24.712 }' 00:21:24.712 06:13:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:24.712 06:13:55 -- common/autotest_common.sh@10 -- # set +x 00:21:25.281 06:13:55 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:25.281 [2024-06-11 06:13:55.811135] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:25.281 [2024-06-11 06:13:55.811201] bdev_raid.c:2939:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:21:25.281 06:13:55 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:25.281 [2024-06-11 06:13:55.862952] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:21:25.281 [2024-06-11 06:13:55.865245] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:25.540 [2024-06-11 06:13:55.985835] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:25.540 [2024-06-11 06:13:55.986451] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:25.540 [2024-06-11 06:13:56.112081] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:25.540 [2024-06-11 06:13:56.112467] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:26.109 [2024-06-11 06:13:56.459071] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:26.109 [2024-06-11 06:13:56.692179] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:26.368 06:13:56 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:26.368 06:13:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:26.368 06:13:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:26.368 06:13:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:26.368 06:13:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:26.368 06:13:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.368 06:13:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.627 [2024-06-11 06:13:57.047399] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:26.627 06:13:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:26.627 "name": "raid_bdev1", 00:21:26.627 "uuid": "5d1b72dc-6f40-49ca-bfff-854367b26777", 00:21:26.627 "strip_size_kb": 0, 00:21:26.627 "state": "online", 00:21:26.627 "raid_level": "raid1", 00:21:26.627 "superblock": false, 00:21:26.627 "num_base_bdevs": 2, 00:21:26.627 "num_base_bdevs_discovered": 2, 00:21:26.627 "num_base_bdevs_operational": 2, 00:21:26.627 "process": { 00:21:26.627 "type": "rebuild", 00:21:26.627 "target": "spare", 00:21:26.627 "progress": { 00:21:26.627 "blocks": 16384, 00:21:26.627 "percent": 25 00:21:26.627 } 00:21:26.627 }, 00:21:26.627 "base_bdevs_list": [ 00:21:26.627 { 00:21:26.627 "name": "spare", 00:21:26.627 "uuid": "95231e91-1745-5e11-a2a1-cad6bc7447de", 00:21:26.627 "is_configured": true, 00:21:26.627 "data_offset": 0, 00:21:26.627 "data_size": 65536 00:21:26.627 }, 00:21:26.627 { 00:21:26.627 "name": "BaseBdev2", 00:21:26.627 "uuid": "ed85eae7-7df9-4104-8ec6-e77f866038b2", 00:21:26.627 "is_configured": true, 00:21:26.627 "data_offset": 0, 00:21:26.627 "data_size": 65536 00:21:26.627 } 00:21:26.627 ] 00:21:26.627 }' 00:21:26.627 06:13:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:26.627 06:13:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:26.627 06:13:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:26.627 06:13:57 -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:26.627 06:13:57 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:26.886 [2024-06-11 06:13:57.420472] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:27.145 [2024-06-11 06:13:57.611129] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:27.145 [2024-06-11 06:13:57.624849] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:27.145 [2024-06-11 06:13:57.664781] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005860 00:21:27.145 06:13:57 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:27.145 06:13:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:27.145 06:13:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:27.145 06:13:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:27.145 06:13:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:27.145 06:13:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:27.145 06:13:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:27.145 06:13:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:27.146 06:13:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:27.146 06:13:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:27.146 06:13:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.146 06:13:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.405 06:13:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:27.405 "name": "raid_bdev1", 00:21:27.405 "uuid": "5d1b72dc-6f40-49ca-bfff-854367b26777", 00:21:27.405 "strip_size_kb": 0, 00:21:27.405 "state": "online", 00:21:27.405 "raid_level": "raid1", 00:21:27.405 "superblock": false, 00:21:27.405 "num_base_bdevs": 2, 00:21:27.405 "num_base_bdevs_discovered": 1, 00:21:27.405 "num_base_bdevs_operational": 1, 00:21:27.405 "base_bdevs_list": [ 00:21:27.405 { 00:21:27.405 "name": null, 00:21:27.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.405 "is_configured": false, 00:21:27.405 "data_offset": 0, 00:21:27.405 "data_size": 65536 00:21:27.405 }, 00:21:27.405 { 00:21:27.405 "name": "BaseBdev2", 00:21:27.405 "uuid": "ed85eae7-7df9-4104-8ec6-e77f866038b2", 00:21:27.405 "is_configured": true, 00:21:27.405 "data_offset": 0, 00:21:27.405 "data_size": 65536 00:21:27.405 } 00:21:27.405 ] 00:21:27.405 }' 00:21:27.405 06:13:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:27.405 06:13:57 -- common/autotest_common.sh@10 -- # set +x 00:21:27.974 06:13:58 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:27.974 06:13:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:27.974 06:13:58 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:27.974 06:13:58 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:27.974 06:13:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:27.974 06:13:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.974 06:13:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.233 06:13:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:28.233 "name": 
"raid_bdev1", 00:21:28.233 "uuid": "5d1b72dc-6f40-49ca-bfff-854367b26777", 00:21:28.233 "strip_size_kb": 0, 00:21:28.233 "state": "online", 00:21:28.233 "raid_level": "raid1", 00:21:28.233 "superblock": false, 00:21:28.233 "num_base_bdevs": 2, 00:21:28.233 "num_base_bdevs_discovered": 1, 00:21:28.233 "num_base_bdevs_operational": 1, 00:21:28.233 "base_bdevs_list": [ 00:21:28.233 { 00:21:28.233 "name": null, 00:21:28.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.233 "is_configured": false, 00:21:28.233 "data_offset": 0, 00:21:28.233 "data_size": 65536 00:21:28.233 }, 00:21:28.233 { 00:21:28.233 "name": "BaseBdev2", 00:21:28.233 "uuid": "ed85eae7-7df9-4104-8ec6-e77f866038b2", 00:21:28.233 "is_configured": true, 00:21:28.233 "data_offset": 0, 00:21:28.233 "data_size": 65536 00:21:28.233 } 00:21:28.233 ] 00:21:28.233 }' 00:21:28.233 06:13:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:28.233 06:13:58 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:28.233 06:13:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:28.492 06:13:58 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:28.492 06:13:58 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:28.492 [2024-06-11 06:13:59.081693] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:28.492 [2024-06-11 06:13:59.081776] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:28.492 06:13:59 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:28.751 [2024-06-11 06:13:59.142598] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:21:28.751 [2024-06-11 06:13:59.144889] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:28.751 [2024-06-11 06:13:59.394512] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:28.751 [2024-06-11 06:13:59.394851] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:29.319 [2024-06-11 06:13:59.739233] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:29.319 [2024-06-11 06:13:59.848465] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:29.319 [2024-06-11 06:13:59.848734] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:29.578 06:14:00 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:29.578 06:14:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:29.578 06:14:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:29.578 06:14:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:29.578 06:14:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:29.578 06:14:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.578 06:14:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.578 [2024-06-11 06:14:00.217201] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:29.836 06:14:00 -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:29.836 "name": "raid_bdev1", 00:21:29.836 "uuid": "5d1b72dc-6f40-49ca-bfff-854367b26777", 00:21:29.836 "strip_size_kb": 0, 00:21:29.836 "state": "online", 00:21:29.836 "raid_level": "raid1", 00:21:29.836 "superblock": false, 00:21:29.836 "num_base_bdevs": 2, 00:21:29.836 "num_base_bdevs_discovered": 2, 00:21:29.836 "num_base_bdevs_operational": 2, 00:21:29.836 "process": { 00:21:29.836 "type": "rebuild", 00:21:29.836 "target": "spare", 00:21:29.836 "progress": { 00:21:29.836 "blocks": 16384, 00:21:29.836 "percent": 25 00:21:29.836 } 00:21:29.836 }, 00:21:29.836 "base_bdevs_list": [ 00:21:29.836 { 00:21:29.836 "name": "spare", 00:21:29.836 "uuid": "95231e91-1745-5e11-a2a1-cad6bc7447de", 00:21:29.836 "is_configured": true, 00:21:29.836 "data_offset": 0, 00:21:29.836 "data_size": 65536 00:21:29.836 }, 00:21:29.836 { 00:21:29.836 "name": "BaseBdev2", 00:21:29.836 "uuid": "ed85eae7-7df9-4104-8ec6-e77f866038b2", 00:21:29.836 "is_configured": true, 00:21:29.836 "data_offset": 0, 00:21:29.836 "data_size": 65536 00:21:29.836 } 00:21:29.836 ] 00:21:29.836 }' 00:21:29.836 06:14:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:29.837 06:14:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:29.837 06:14:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:30.095 06:14:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:30.095 06:14:00 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:21:30.095 06:14:00 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:21:30.095 06:14:00 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:30.095 06:14:00 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:21:30.095 06:14:00 -- bdev/bdev_raid.sh@657 -- # local timeout=433 00:21:30.095 06:14:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:30.095 06:14:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:30.095 06:14:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:30.095 06:14:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:30.095 06:14:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:30.095 06:14:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:30.095 06:14:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.095 06:14:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.095 [2024-06-11 06:14:00.548157] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:30.095 06:14:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:30.095 "name": "raid_bdev1", 00:21:30.095 "uuid": "5d1b72dc-6f40-49ca-bfff-854367b26777", 00:21:30.095 "strip_size_kb": 0, 00:21:30.095 "state": "online", 00:21:30.095 "raid_level": "raid1", 00:21:30.095 "superblock": false, 00:21:30.095 "num_base_bdevs": 2, 00:21:30.095 "num_base_bdevs_discovered": 2, 00:21:30.095 "num_base_bdevs_operational": 2, 00:21:30.096 "process": { 00:21:30.096 "type": "rebuild", 00:21:30.096 "target": "spare", 00:21:30.096 "progress": { 00:21:30.096 "blocks": 20480, 00:21:30.096 "percent": 31 00:21:30.096 } 00:21:30.096 }, 00:21:30.096 "base_bdevs_list": [ 00:21:30.096 { 00:21:30.096 "name": "spare", 00:21:30.096 "uuid": "95231e91-1745-5e11-a2a1-cad6bc7447de", 00:21:30.096 "is_configured": true, 00:21:30.096 "data_offset": 0, 
00:21:30.096 "data_size": 65536 00:21:30.096 }, 00:21:30.096 { 00:21:30.096 "name": "BaseBdev2", 00:21:30.096 "uuid": "ed85eae7-7df9-4104-8ec6-e77f866038b2", 00:21:30.096 "is_configured": true, 00:21:30.096 "data_offset": 0, 00:21:30.096 "data_size": 65536 00:21:30.096 } 00:21:30.096 ] 00:21:30.096 }' 00:21:30.096 06:14:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:30.354 06:14:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:30.355 06:14:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:30.355 [2024-06-11 06:14:00.764517] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:30.355 06:14:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:30.355 06:14:00 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:30.355 [2024-06-11 06:14:00.997101] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:30.613 [2024-06-11 06:14:01.199076] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:31.182 [2024-06-11 06:14:01.535620] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:31.182 [2024-06-11 06:14:01.535976] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:31.182 06:14:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:31.182 06:14:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:31.182 06:14:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:31.182 06:14:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:31.182 06:14:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:31.182 06:14:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:31.182 06:14:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.182 06:14:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.440 [2024-06-11 06:14:01.873910] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:31.440 06:14:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:31.440 "name": "raid_bdev1", 00:21:31.440 "uuid": "5d1b72dc-6f40-49ca-bfff-854367b26777", 00:21:31.440 "strip_size_kb": 0, 00:21:31.440 "state": "online", 00:21:31.440 "raid_level": "raid1", 00:21:31.440 "superblock": false, 00:21:31.440 "num_base_bdevs": 2, 00:21:31.440 "num_base_bdevs_discovered": 2, 00:21:31.440 "num_base_bdevs_operational": 2, 00:21:31.440 "process": { 00:21:31.440 "type": "rebuild", 00:21:31.440 "target": "spare", 00:21:31.440 "progress": { 00:21:31.440 "blocks": 38912, 00:21:31.440 "percent": 59 00:21:31.440 } 00:21:31.440 }, 00:21:31.440 "base_bdevs_list": [ 00:21:31.440 { 00:21:31.440 "name": "spare", 00:21:31.440 "uuid": "95231e91-1745-5e11-a2a1-cad6bc7447de", 00:21:31.440 "is_configured": true, 00:21:31.440 "data_offset": 0, 00:21:31.440 "data_size": 65536 00:21:31.440 }, 00:21:31.440 { 00:21:31.440 "name": "BaseBdev2", 00:21:31.440 "uuid": "ed85eae7-7df9-4104-8ec6-e77f866038b2", 00:21:31.440 "is_configured": true, 00:21:31.440 "data_offset": 0, 00:21:31.440 "data_size": 65536 00:21:31.440 } 00:21:31.440 ] 00:21:31.440 }' 
00:21:31.440 06:14:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:31.699 06:14:02 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:31.699 06:14:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:31.699 06:14:02 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:31.699 06:14:02 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:32.635 06:14:03 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:32.635 06:14:03 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:32.635 06:14:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:32.635 06:14:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:32.635 06:14:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:32.635 06:14:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:32.635 06:14:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.635 06:14:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.896 06:14:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:32.896 "name": "raid_bdev1", 00:21:32.896 "uuid": "5d1b72dc-6f40-49ca-bfff-854367b26777", 00:21:32.896 "strip_size_kb": 0, 00:21:32.896 "state": "online", 00:21:32.896 "raid_level": "raid1", 00:21:32.896 "superblock": false, 00:21:32.896 "num_base_bdevs": 2, 00:21:32.896 "num_base_bdevs_discovered": 2, 00:21:32.896 "num_base_bdevs_operational": 2, 00:21:32.896 "process": { 00:21:32.896 "type": "rebuild", 00:21:32.896 "target": "spare", 00:21:32.896 "progress": { 00:21:32.896 "blocks": 63488, 00:21:32.896 "percent": 96 00:21:32.896 } 00:21:32.896 }, 00:21:32.896 "base_bdevs_list": [ 00:21:32.896 { 00:21:32.896 "name": "spare", 00:21:32.896 "uuid": "95231e91-1745-5e11-a2a1-cad6bc7447de", 00:21:32.896 "is_configured": true, 00:21:32.896 "data_offset": 0, 00:21:32.896 "data_size": 65536 00:21:32.896 }, 00:21:32.896 { 00:21:32.896 "name": "BaseBdev2", 00:21:32.896 "uuid": "ed85eae7-7df9-4104-8ec6-e77f866038b2", 00:21:32.896 "is_configured": true, 00:21:32.896 "data_offset": 0, 00:21:32.896 "data_size": 65536 00:21:32.896 } 00:21:32.896 ] 00:21:32.896 }' 00:21:32.896 06:14:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:32.896 [2024-06-11 06:14:03.411252] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:32.896 06:14:03 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:32.896 06:14:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:32.896 06:14:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:32.896 06:14:03 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:32.896 [2024-06-11 06:14:03.511255] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:32.896 [2024-06-11 06:14:03.513627] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:33.867 06:14:04 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:33.867 06:14:04 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:33.867 06:14:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:33.867 06:14:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:33.867 06:14:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:33.867 06:14:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:33.867 06:14:04 -- 
bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.867 06:14:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.125 06:14:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:34.125 "name": "raid_bdev1", 00:21:34.125 "uuid": "5d1b72dc-6f40-49ca-bfff-854367b26777", 00:21:34.125 "strip_size_kb": 0, 00:21:34.125 "state": "online", 00:21:34.125 "raid_level": "raid1", 00:21:34.125 "superblock": false, 00:21:34.125 "num_base_bdevs": 2, 00:21:34.125 "num_base_bdevs_discovered": 2, 00:21:34.125 "num_base_bdevs_operational": 2, 00:21:34.125 "base_bdevs_list": [ 00:21:34.125 { 00:21:34.125 "name": "spare", 00:21:34.125 "uuid": "95231e91-1745-5e11-a2a1-cad6bc7447de", 00:21:34.125 "is_configured": true, 00:21:34.125 "data_offset": 0, 00:21:34.125 "data_size": 65536 00:21:34.125 }, 00:21:34.125 { 00:21:34.125 "name": "BaseBdev2", 00:21:34.125 "uuid": "ed85eae7-7df9-4104-8ec6-e77f866038b2", 00:21:34.125 "is_configured": true, 00:21:34.125 "data_offset": 0, 00:21:34.125 "data_size": 65536 00:21:34.125 } 00:21:34.125 ] 00:21:34.125 }' 00:21:34.125 06:14:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:34.125 06:14:04 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:34.384 06:14:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:34.384 06:14:04 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:34.384 06:14:04 -- bdev/bdev_raid.sh@660 -- # break 00:21:34.384 06:14:04 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:34.384 06:14:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:34.384 06:14:04 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:34.384 06:14:04 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:34.384 06:14:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:34.384 06:14:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.384 06:14:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.643 06:14:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:34.643 "name": "raid_bdev1", 00:21:34.643 "uuid": "5d1b72dc-6f40-49ca-bfff-854367b26777", 00:21:34.643 "strip_size_kb": 0, 00:21:34.643 "state": "online", 00:21:34.643 "raid_level": "raid1", 00:21:34.643 "superblock": false, 00:21:34.643 "num_base_bdevs": 2, 00:21:34.643 "num_base_bdevs_discovered": 2, 00:21:34.643 "num_base_bdevs_operational": 2, 00:21:34.643 "base_bdevs_list": [ 00:21:34.643 { 00:21:34.643 "name": "spare", 00:21:34.643 "uuid": "95231e91-1745-5e11-a2a1-cad6bc7447de", 00:21:34.643 "is_configured": true, 00:21:34.643 "data_offset": 0, 00:21:34.643 "data_size": 65536 00:21:34.643 }, 00:21:34.643 { 00:21:34.643 "name": "BaseBdev2", 00:21:34.643 "uuid": "ed85eae7-7df9-4104-8ec6-e77f866038b2", 00:21:34.643 "is_configured": true, 00:21:34.643 "data_offset": 0, 00:21:34.643 "data_size": 65536 00:21:34.643 } 00:21:34.643 ] 00:21:34.643 }' 00:21:34.643 06:14:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:34.643 06:14:05 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:34.643 06:14:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:34.643 06:14:05 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:34.644 06:14:05 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 
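Annotation: once the process fields report none, verify_raid_bdev_state re-asserts the steady-state facts — name, "online", "raid1", and both base bdevs discovered and operational — against the same JSON. A minimal jq-based equivalent; the helper name and reduced argument set are illustrative:

    # Sketch: assert steady-state raid bdev facts, in the spirit of
    # verify_raid_bdev_state.
    verify_raid_state() {
        local name=$1 state=$2 level=$3 operational=$4 info
        info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_raid_get_bdevs all | jq -r --arg n "$name" '.[] | select(.name == $n)')
        [[ $(jq -r .state <<< "$info") == "$state" ]] &&
            [[ $(jq -r .raid_level <<< "$info") == "$level" ]] &&
            [[ $(jq -r .num_base_bdevs_operational <<< "$info") -eq "$operational" ]]
    }
    verify_raid_state raid_bdev1 online raid1 2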
00:21:34.644 06:14:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:34.644 06:14:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:34.644 06:14:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:34.644 06:14:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:34.644 06:14:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:34.644 06:14:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:34.644 06:14:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:34.644 06:14:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:34.644 06:14:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:34.644 06:14:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.644 06:14:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.903 06:14:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:34.903 "name": "raid_bdev1", 00:21:34.903 "uuid": "5d1b72dc-6f40-49ca-bfff-854367b26777", 00:21:34.903 "strip_size_kb": 0, 00:21:34.903 "state": "online", 00:21:34.903 "raid_level": "raid1", 00:21:34.903 "superblock": false, 00:21:34.903 "num_base_bdevs": 2, 00:21:34.903 "num_base_bdevs_discovered": 2, 00:21:34.903 "num_base_bdevs_operational": 2, 00:21:34.903 "base_bdevs_list": [ 00:21:34.903 { 00:21:34.903 "name": "spare", 00:21:34.903 "uuid": "95231e91-1745-5e11-a2a1-cad6bc7447de", 00:21:34.903 "is_configured": true, 00:21:34.903 "data_offset": 0, 00:21:34.903 "data_size": 65536 00:21:34.903 }, 00:21:34.903 { 00:21:34.903 "name": "BaseBdev2", 00:21:34.903 "uuid": "ed85eae7-7df9-4104-8ec6-e77f866038b2", 00:21:34.903 "is_configured": true, 00:21:34.903 "data_offset": 0, 00:21:34.903 "data_size": 65536 00:21:34.903 } 00:21:34.903 ] 00:21:34.903 }' 00:21:34.903 06:14:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:34.903 06:14:05 -- common/autotest_common.sh@10 -- # set +x 00:21:35.470 06:14:05 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:35.729 [2024-06-11 06:14:06.188798] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:35.729 [2024-06-11 06:14:06.188870] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:35.729 00:21:35.729 Latency(us) 00:21:35.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.729 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:35.729 raid_bdev1 : 11.52 115.36 346.07 0.00 0.00 12717.36 296.47 115343.36 00:21:35.729 =================================================================================================================== 00:21:35.729 Total : 115.36 346.07 0.00 0.00 12717.36 296.47 115343.36 00:21:35.729 [2024-06-11 06:14:06.261502] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:35.729 [2024-06-11 06:14:06.261539] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:35.729 [2024-06-11 06:14:06.261629] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:35.729 [2024-06-11 06:14:06.261638] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:21:35.729 0 00:21:35.729 06:14:06 -- bdev/bdev_raid.sh@671 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.729 06:14:06 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:35.988 06:14:06 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:35.988 06:14:06 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:21:35.988 06:14:06 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:35.989 06:14:06 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:35.989 06:14:06 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:35.989 06:14:06 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:35.989 06:14:06 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:35.989 06:14:06 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:35.989 06:14:06 -- bdev/nbd_common.sh@12 -- # local i 00:21:35.989 06:14:06 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:35.989 06:14:06 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:35.989 06:14:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:36.247 /dev/nbd0 00:21:36.247 06:14:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:36.247 06:14:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:36.247 06:14:06 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:36.247 06:14:06 -- common/autotest_common.sh@857 -- # local i 00:21:36.247 06:14:06 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:36.247 06:14:06 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:36.247 06:14:06 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:36.247 06:14:06 -- common/autotest_common.sh@861 -- # break 00:21:36.247 06:14:06 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:36.247 06:14:06 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:36.247 06:14:06 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:36.247 1+0 records in 00:21:36.247 1+0 records out 00:21:36.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325898 s, 12.6 MB/s 00:21:36.247 06:14:06 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:36.247 06:14:06 -- common/autotest_common.sh@874 -- # size=4096 00:21:36.247 06:14:06 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:36.247 06:14:06 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:36.247 06:14:06 -- common/autotest_common.sh@877 -- # return 0 00:21:36.247 06:14:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:36.247 06:14:06 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:36.247 06:14:06 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:36.247 06:14:06 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:21:36.247 06:14:06 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:21:36.247 06:14:06 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:36.247 06:14:06 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:21:36.247 06:14:06 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:36.247 06:14:06 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:36.247 06:14:06 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:36.247 06:14:06 -- bdev/nbd_common.sh@12 -- # local i 00:21:36.247 06:14:06 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:36.247 06:14:06 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:36.247 06:14:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:21:36.506 /dev/nbd1 00:21:36.506 06:14:07 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:36.506 06:14:07 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:36.506 06:14:07 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:36.506 06:14:07 -- common/autotest_common.sh@857 -- # local i 00:21:36.506 06:14:07 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:36.506 06:14:07 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:36.506 06:14:07 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:36.506 06:14:07 -- common/autotest_common.sh@861 -- # break 00:21:36.506 06:14:07 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:36.506 06:14:07 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:36.506 06:14:07 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:36.506 1+0 records in 00:21:36.506 1+0 records out 00:21:36.506 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036113 s, 11.3 MB/s 00:21:36.506 06:14:07 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:36.506 06:14:07 -- common/autotest_common.sh@874 -- # size=4096 00:21:36.506 06:14:07 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:36.506 06:14:07 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:36.506 06:14:07 -- common/autotest_common.sh@877 -- # return 0 00:21:36.506 06:14:07 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:36.506 06:14:07 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:36.506 06:14:07 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:36.765 06:14:07 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:36.765 06:14:07 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:36.765 06:14:07 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:36.765 06:14:07 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:36.765 06:14:07 -- bdev/nbd_common.sh@51 -- # local i 00:21:36.765 06:14:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:36.765 06:14:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:37.025 06:14:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:37.025 06:14:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:37.025 06:14:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:37.025 06:14:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:37.025 06:14:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:37.025 06:14:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:37.025 06:14:07 -- bdev/nbd_common.sh@41 -- # break 00:21:37.025 06:14:07 -- bdev/nbd_common.sh@45 -- # return 0 00:21:37.025 06:14:07 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:37.025 06:14:07 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:37.025 06:14:07 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:37.025 06:14:07 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:37.025 06:14:07 -- bdev/nbd_common.sh@51 -- # local i 00:21:37.025 06:14:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
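For reference, the sequence above is the data-integrity check for the superblock-less variant of the test: the rebuilt member (spare) and the surviving member (BaseBdev2) are each exported as kernel block devices over NBD and byte-compared from offset 0, since without an on-disk superblock the mirrored data starts at block 0 on both legs. A minimal sketch of the same check, assuming the paths from this log (rpc.py under the SPDK checkout, the RPC socket at /var/tmp/spdk-raid.sock) and that /dev/nbd0 and /dev/nbd1 are free:

    # Expose the rebuilt member and the surviving member as kernel block devices.
    scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0
    scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1

    # RAID1 without a superblock: data begins at byte 0 on both legs,
    # so a whole-device cmp at offset 0 must report no difference.
    cmp -i 0 /dev/nbd0 /dev/nbd1

    # Detach the NBD mappings again.
    scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
    scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0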
00:21:37.025 06:14:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:37.284 06:14:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:37.284 06:14:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:37.284 06:14:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:37.284 06:14:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:37.285 06:14:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:37.285 06:14:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:37.285 06:14:07 -- bdev/nbd_common.sh@41 -- # break 00:21:37.285 06:14:07 -- bdev/nbd_common.sh@45 -- # return 0 00:21:37.285 06:14:07 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:21:37.285 06:14:07 -- bdev/bdev_raid.sh@709 -- # killprocess 124194 00:21:37.285 06:14:07 -- common/autotest_common.sh@926 -- # '[' -z 124194 ']' 00:21:37.285 06:14:07 -- common/autotest_common.sh@930 -- # kill -0 124194 00:21:37.285 06:14:07 -- common/autotest_common.sh@931 -- # uname 00:21:37.285 06:14:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:37.285 06:14:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124194 00:21:37.285 06:14:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:37.285 06:14:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:37.285 06:14:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124194' 00:21:37.285 killing process with pid 124194 00:21:37.285 06:14:07 -- common/autotest_common.sh@945 -- # kill 124194 00:21:37.285 Received shutdown signal, test time was about 13.009833 seconds 00:21:37.285 00:21:37.285 Latency(us) 00:21:37.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.285 =================================================================================================================== 00:21:37.285 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:37.285 [2024-06-11 06:14:07.729600] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:37.285 06:14:07 -- common/autotest_common.sh@950 -- # wait 124194 00:21:37.544 [2024-06-11 06:14:07.973180] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:38.923 ************************************ 00:21:38.923 END TEST raid_rebuild_test_io 00:21:38.923 ************************************ 00:21:38.923 06:14:09 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:38.923 00:21:38.923 real 0m18.645s 00:21:38.923 user 0m27.213s 00:21:38.923 sys 0m2.746s 00:21:38.923 06:14:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:38.923 06:14:09 -- common/autotest_common.sh@10 -- # set +x 00:21:38.923 06:14:09 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:21:38.923 06:14:09 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:38.923 06:14:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:38.923 06:14:09 -- common/autotest_common.sh@10 -- # set +x 00:21:38.923 ************************************ 00:21:38.923 START TEST raid_rebuild_test_sb_io 00:21:38.923 ************************************ 00:21:38.923 06:14:09 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true true 00:21:38.923 06:14:09 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:38.923 06:14:09 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:21:38.923 06:14:09 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:21:38.923 
06:14:09 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:21:38.923 06:14:09 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:38.923 06:14:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:38.923 06:14:09 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:38.923 06:14:09 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:38.923 06:14:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:38.923 06:14:09 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:38.923 06:14:09 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:38.923 06:14:09 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:38.923 06:14:09 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:38.923 06:14:09 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:38.923 06:14:09 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:38.923 06:14:09 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:38.923 06:14:09 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:38.923 06:14:09 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:38.923 06:14:09 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:38.923 06:14:09 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:38.923 06:14:09 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:38.923 06:14:09 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:21:38.923 06:14:09 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:21:38.923 06:14:09 -- bdev/bdev_raid.sh@544 -- # raid_pid=124683 00:21:38.923 06:14:09 -- bdev/bdev_raid.sh@545 -- # waitforlisten 124683 /var/tmp/spdk-raid.sock 00:21:38.923 06:14:09 -- common/autotest_common.sh@819 -- # '[' -z 124683 ']' 00:21:38.923 06:14:09 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:38.923 06:14:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:38.923 06:14:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:38.923 06:14:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:38.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:38.923 06:14:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:38.923 06:14:09 -- common/autotest_common.sh@10 -- # set +x 00:21:39.183 [2024-06-11 06:14:09.583474] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:21:39.183 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:39.183 Zero copy mechanism will not be used. 
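The bdevperf invocation above is the harness for the background-I/O variant: started with -z, bdevperf parks until a perform_tests RPC arrives, which gives the script time to assemble the raid bdev on the same socket before traffic begins. A condensed sketch of that pattern, assuming the usual bdevperf flag semantics (-t runtime in seconds, -w workload, -M read percentage, -o I/O size, -q queue depth) and dropping the log's -U flag for brevity; -T names the bdev under test per this invocation:

    # Start bdevperf idle: -z makes it wait for a perform_tests RPC instead of
    # generating I/O immediately. Workload: 60 s of 50/50 random read/write,
    # 3 MiB I/Os at queue depth 2, against raid_bdev1.
    build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 3M -q 2 -z -L bdev_raid &

    # ... create the base bdevs and the raid bdev over the same RPC socket ...

    # Kick off the configured workload once the array is online.
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests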
00:21:39.183 [2024-06-11 06:14:09.583671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124683 ] 00:21:39.183 [2024-06-11 06:14:09.765814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.442 [2024-06-11 06:14:09.983462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.702 [2024-06-11 06:14:10.213982] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:39.961 06:14:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:39.961 06:14:10 -- common/autotest_common.sh@852 -- # return 0 00:21:39.961 06:14:10 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:39.961 06:14:10 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:39.961 06:14:10 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:40.221 BaseBdev1_malloc 00:21:40.221 06:14:10 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:40.480 [2024-06-11 06:14:10.987404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:40.480 [2024-06-11 06:14:10.987526] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.480 [2024-06-11 06:14:10.987569] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:21:40.480 [2024-06-11 06:14:10.987617] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.480 [2024-06-11 06:14:10.990333] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.480 [2024-06-11 06:14:10.990382] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:40.480 BaseBdev1 00:21:40.480 06:14:10 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:40.480 06:14:10 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:40.480 06:14:10 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:40.739 BaseBdev2_malloc 00:21:40.739 06:14:11 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:40.998 [2024-06-11 06:14:11.401825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:40.998 [2024-06-11 06:14:11.401912] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.998 [2024-06-11 06:14:11.401974] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:21:40.998 [2024-06-11 06:14:11.402033] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.998 [2024-06-11 06:14:11.404461] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.998 [2024-06-11 06:14:11.404508] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:40.998 BaseBdev2 00:21:40.998 06:14:11 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:40.998 spare_malloc 00:21:40.998 06:14:11 
-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:41.257 spare_delay 00:21:41.257 06:14:11 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:41.517 [2024-06-11 06:14:12.026848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:41.517 [2024-06-11 06:14:12.026953] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:41.517 [2024-06-11 06:14:12.027001] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:21:41.517 [2024-06-11 06:14:12.027045] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.517 [2024-06-11 06:14:12.029714] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.517 [2024-06-11 06:14:12.029787] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:41.517 spare 00:21:41.517 06:14:12 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:21:41.777 [2024-06-11 06:14:12.198939] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:41.777 [2024-06-11 06:14:12.201244] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:41.777 [2024-06-11 06:14:12.201452] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:21:41.777 [2024-06-11 06:14:12.201462] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:41.777 [2024-06-11 06:14:12.201620] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:21:41.777 [2024-06-11 06:14:12.201993] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:21:41.777 [2024-06-11 06:14:12.202014] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:21:41.777 [2024-06-11 06:14:12.202169] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:41.777 06:14:12 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:41.777 06:14:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:41.777 06:14:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:41.777 06:14:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:41.777 06:14:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:41.777 06:14:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:41.777 06:14:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:41.777 06:14:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:41.777 06:14:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:41.777 06:14:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:41.777 06:14:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.777 06:14:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.037 06:14:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:42.037 "name": "raid_bdev1", 00:21:42.037 "uuid": "f21632ce-3471-44db-864e-6dd605a84f76", 00:21:42.037 
"strip_size_kb": 0, 00:21:42.037 "state": "online", 00:21:42.037 "raid_level": "raid1", 00:21:42.037 "superblock": true, 00:21:42.037 "num_base_bdevs": 2, 00:21:42.037 "num_base_bdevs_discovered": 2, 00:21:42.037 "num_base_bdevs_operational": 2, 00:21:42.037 "base_bdevs_list": [ 00:21:42.037 { 00:21:42.037 "name": "BaseBdev1", 00:21:42.037 "uuid": "25e530bb-6de6-51d4-bf34-263ea2c58631", 00:21:42.037 "is_configured": true, 00:21:42.037 "data_offset": 2048, 00:21:42.037 "data_size": 63488 00:21:42.037 }, 00:21:42.037 { 00:21:42.037 "name": "BaseBdev2", 00:21:42.037 "uuid": "c867f0ea-acaa-5db0-a258-72b444962707", 00:21:42.037 "is_configured": true, 00:21:42.037 "data_offset": 2048, 00:21:42.037 "data_size": 63488 00:21:42.037 } 00:21:42.037 ] 00:21:42.037 }' 00:21:42.037 06:14:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:42.037 06:14:12 -- common/autotest_common.sh@10 -- # set +x 00:21:42.605 06:14:12 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:42.605 06:14:12 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:42.605 [2024-06-11 06:14:13.239337] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:42.864 06:14:13 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:21:42.864 06:14:13 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.864 06:14:13 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:42.864 06:14:13 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:21:42.864 06:14:13 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:21:42.864 06:14:13 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:42.864 06:14:13 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:42.864 [2024-06-11 06:14:13.508429] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:21:43.124 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:43.124 Zero copy mechanism will not be used. 00:21:43.124 Running I/O for 60 seconds... 
00:21:43.124 [2024-06-11 06:14:13.649574] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:43.124 [2024-06-11 06:14:13.655324] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:21:43.124 06:14:13 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:43.124 06:14:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:43.124 06:14:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:43.124 06:14:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:43.124 06:14:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:43.124 06:14:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:43.124 06:14:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:43.124 06:14:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:43.124 06:14:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:43.124 06:14:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:43.124 06:14:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.124 06:14:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.383 06:14:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:43.383 "name": "raid_bdev1", 00:21:43.383 "uuid": "f21632ce-3471-44db-864e-6dd605a84f76", 00:21:43.383 "strip_size_kb": 0, 00:21:43.383 "state": "online", 00:21:43.383 "raid_level": "raid1", 00:21:43.383 "superblock": true, 00:21:43.383 "num_base_bdevs": 2, 00:21:43.383 "num_base_bdevs_discovered": 1, 00:21:43.383 "num_base_bdevs_operational": 1, 00:21:43.383 "base_bdevs_list": [ 00:21:43.383 { 00:21:43.383 "name": null, 00:21:43.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.383 "is_configured": false, 00:21:43.383 "data_offset": 2048, 00:21:43.383 "data_size": 63488 00:21:43.383 }, 00:21:43.383 { 00:21:43.383 "name": "BaseBdev2", 00:21:43.383 "uuid": "c867f0ea-acaa-5db0-a258-72b444962707", 00:21:43.383 "is_configured": true, 00:21:43.383 "data_offset": 2048, 00:21:43.383 "data_size": 63488 00:21:43.383 } 00:21:43.383 ] 00:21:43.383 }' 00:21:43.383 06:14:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:43.383 06:14:13 -- common/autotest_common.sh@10 -- # set +x 00:21:43.952 06:14:14 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:44.211 [2024-06-11 06:14:14.629742] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:44.211 [2024-06-11 06:14:14.629833] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:44.211 06:14:14 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:44.211 [2024-06-11 06:14:14.687723] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:21:44.211 [2024-06-11 06:14:14.690010] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:44.211 [2024-06-11 06:14:14.803398] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:44.211 [2024-06-11 06:14:14.803943] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:44.471 [2024-06-11 06:14:15.038403] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:21:44.471 [2024-06-11 06:14:15.038742] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:45.039 [2024-06-11 06:14:15.522159] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:45.039 [2024-06-11 06:14:15.522464] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:45.298 06:14:15 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:45.298 06:14:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:45.298 06:14:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:45.298 06:14:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:45.298 06:14:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:45.298 06:14:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.298 06:14:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.299 [2024-06-11 06:14:15.854123] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:45.299 06:14:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:45.299 "name": "raid_bdev1", 00:21:45.299 "uuid": "f21632ce-3471-44db-864e-6dd605a84f76", 00:21:45.299 "strip_size_kb": 0, 00:21:45.299 "state": "online", 00:21:45.299 "raid_level": "raid1", 00:21:45.299 "superblock": true, 00:21:45.299 "num_base_bdevs": 2, 00:21:45.299 "num_base_bdevs_discovered": 2, 00:21:45.299 "num_base_bdevs_operational": 2, 00:21:45.299 "process": { 00:21:45.299 "type": "rebuild", 00:21:45.299 "target": "spare", 00:21:45.299 "progress": { 00:21:45.299 "blocks": 14336, 00:21:45.299 "percent": 22 00:21:45.299 } 00:21:45.299 }, 00:21:45.299 "base_bdevs_list": [ 00:21:45.299 { 00:21:45.299 "name": "spare", 00:21:45.299 "uuid": "88101150-d8e7-5b75-8de2-2666092f4bb4", 00:21:45.299 "is_configured": true, 00:21:45.299 "data_offset": 2048, 00:21:45.299 "data_size": 63488 00:21:45.299 }, 00:21:45.299 { 00:21:45.299 "name": "BaseBdev2", 00:21:45.299 "uuid": "c867f0ea-acaa-5db0-a258-72b444962707", 00:21:45.299 "is_configured": true, 00:21:45.299 "data_offset": 2048, 00:21:45.299 "data_size": 63488 00:21:45.299 } 00:21:45.299 ] 00:21:45.299 }' 00:21:45.299 06:14:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:45.558 06:14:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:45.558 06:14:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:45.558 06:14:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:45.558 06:14:16 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:45.558 [2024-06-11 06:14:16.064558] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:45.558 [2024-06-11 06:14:16.064924] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:45.817 [2024-06-11 06:14:16.229112] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:45.817 [2024-06-11 06:14:16.374811] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:45.817 [2024-06-11 
06:14:16.377775] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:45.817 [2024-06-11 06:14:16.409026] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:21:45.817 06:14:16 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:45.817 06:14:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:45.817 06:14:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:45.817 06:14:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:45.817 06:14:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:45.817 06:14:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:45.817 06:14:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:45.817 06:14:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:45.817 06:14:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:45.817 06:14:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:45.817 06:14:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.817 06:14:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.077 06:14:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:46.077 "name": "raid_bdev1", 00:21:46.077 "uuid": "f21632ce-3471-44db-864e-6dd605a84f76", 00:21:46.077 "strip_size_kb": 0, 00:21:46.077 "state": "online", 00:21:46.077 "raid_level": "raid1", 00:21:46.077 "superblock": true, 00:21:46.077 "num_base_bdevs": 2, 00:21:46.077 "num_base_bdevs_discovered": 1, 00:21:46.077 "num_base_bdevs_operational": 1, 00:21:46.077 "base_bdevs_list": [ 00:21:46.077 { 00:21:46.077 "name": null, 00:21:46.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.077 "is_configured": false, 00:21:46.077 "data_offset": 2048, 00:21:46.077 "data_size": 63488 00:21:46.077 }, 00:21:46.077 { 00:21:46.077 "name": "BaseBdev2", 00:21:46.077 "uuid": "c867f0ea-acaa-5db0-a258-72b444962707", 00:21:46.077 "is_configured": true, 00:21:46.077 "data_offset": 2048, 00:21:46.077 "data_size": 63488 00:21:46.077 } 00:21:46.077 ] 00:21:46.077 }' 00:21:46.077 06:14:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:46.077 06:14:16 -- common/autotest_common.sh@10 -- # set +x 00:21:46.646 06:14:17 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:46.646 06:14:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:46.646 06:14:17 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:46.646 06:14:17 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:46.646 06:14:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:46.646 06:14:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.646 06:14:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.905 06:14:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:46.905 "name": "raid_bdev1", 00:21:46.905 "uuid": "f21632ce-3471-44db-864e-6dd605a84f76", 00:21:46.905 "strip_size_kb": 0, 00:21:46.905 "state": "online", 00:21:46.905 "raid_level": "raid1", 00:21:46.905 "superblock": true, 00:21:46.905 "num_base_bdevs": 2, 00:21:46.905 "num_base_bdevs_discovered": 1, 00:21:46.905 "num_base_bdevs_operational": 1, 00:21:46.905 "base_bdevs_list": [ 00:21:46.905 { 00:21:46.905 "name": null, 00:21:46.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.905 
"is_configured": false, 00:21:46.905 "data_offset": 2048, 00:21:46.905 "data_size": 63488 00:21:46.905 }, 00:21:46.905 { 00:21:46.905 "name": "BaseBdev2", 00:21:46.905 "uuid": "c867f0ea-acaa-5db0-a258-72b444962707", 00:21:46.905 "is_configured": true, 00:21:46.905 "data_offset": 2048, 00:21:46.905 "data_size": 63488 00:21:46.905 } 00:21:46.905 ] 00:21:46.905 }' 00:21:46.905 06:14:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:46.905 06:14:17 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:46.905 06:14:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:46.905 06:14:17 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:46.905 06:14:17 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:47.165 [2024-06-11 06:14:17.707870] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:47.165 [2024-06-11 06:14:17.707932] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:47.165 06:14:17 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:47.165 [2024-06-11 06:14:17.752513] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:21:47.165 [2024-06-11 06:14:17.754863] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:47.424 [2024-06-11 06:14:17.881431] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:47.424 [2024-06-11 06:14:17.882027] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:47.683 [2024-06-11 06:14:18.096669] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:47.683 [2024-06-11 06:14:18.096972] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:47.942 [2024-06-11 06:14:18.435802] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:47.942 [2024-06-11 06:14:18.436404] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:48.201 [2024-06-11 06:14:18.645724] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:48.201 [2024-06-11 06:14:18.646035] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:48.201 06:14:18 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:48.201 06:14:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:48.201 06:14:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:48.201 06:14:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:48.201 06:14:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:48.201 06:14:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.201 06:14:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.461 [2024-06-11 06:14:18.964206] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:48.461 06:14:18 -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:48.461 "name": "raid_bdev1", 00:21:48.461 "uuid": "f21632ce-3471-44db-864e-6dd605a84f76", 00:21:48.461 "strip_size_kb": 0, 00:21:48.461 "state": "online", 00:21:48.461 "raid_level": "raid1", 00:21:48.461 "superblock": true, 00:21:48.461 "num_base_bdevs": 2, 00:21:48.461 "num_base_bdevs_discovered": 2, 00:21:48.461 "num_base_bdevs_operational": 2, 00:21:48.461 "process": { 00:21:48.461 "type": "rebuild", 00:21:48.461 "target": "spare", 00:21:48.461 "progress": { 00:21:48.461 "blocks": 14336, 00:21:48.461 "percent": 22 00:21:48.461 } 00:21:48.461 }, 00:21:48.461 "base_bdevs_list": [ 00:21:48.461 { 00:21:48.461 "name": "spare", 00:21:48.461 "uuid": "88101150-d8e7-5b75-8de2-2666092f4bb4", 00:21:48.461 "is_configured": true, 00:21:48.461 "data_offset": 2048, 00:21:48.461 "data_size": 63488 00:21:48.461 }, 00:21:48.461 { 00:21:48.461 "name": "BaseBdev2", 00:21:48.461 "uuid": "c867f0ea-acaa-5db0-a258-72b444962707", 00:21:48.461 "is_configured": true, 00:21:48.461 "data_offset": 2048, 00:21:48.461 "data_size": 63488 00:21:48.461 } 00:21:48.461 ] 00:21:48.461 }' 00:21:48.461 06:14:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:48.461 06:14:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:48.461 06:14:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:48.461 06:14:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:48.461 06:14:19 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:21:48.461 06:14:19 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:21:48.461 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:21:48.461 06:14:19 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:21:48.461 06:14:19 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:48.461 06:14:19 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:21:48.461 06:14:19 -- bdev/bdev_raid.sh@657 -- # local timeout=452 00:21:48.461 06:14:19 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:48.461 06:14:19 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:48.461 06:14:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:48.461 06:14:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:48.461 06:14:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:48.461 06:14:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:48.461 06:14:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.461 06:14:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.721 06:14:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:48.721 "name": "raid_bdev1", 00:21:48.721 "uuid": "f21632ce-3471-44db-864e-6dd605a84f76", 00:21:48.721 "strip_size_kb": 0, 00:21:48.721 "state": "online", 00:21:48.721 "raid_level": "raid1", 00:21:48.721 "superblock": true, 00:21:48.721 "num_base_bdevs": 2, 00:21:48.721 "num_base_bdevs_discovered": 2, 00:21:48.721 "num_base_bdevs_operational": 2, 00:21:48.721 "process": { 00:21:48.721 "type": "rebuild", 00:21:48.721 "target": "spare", 00:21:48.721 "progress": { 00:21:48.721 "blocks": 20480, 00:21:48.721 "percent": 32 00:21:48.721 } 00:21:48.721 }, 00:21:48.721 "base_bdevs_list": [ 00:21:48.721 { 00:21:48.721 "name": "spare", 00:21:48.721 "uuid": "88101150-d8e7-5b75-8de2-2666092f4bb4", 00:21:48.721 "is_configured": true, 00:21:48.721 
"data_offset": 2048, 00:21:48.721 "data_size": 63488 00:21:48.721 }, 00:21:48.721 { 00:21:48.721 "name": "BaseBdev2", 00:21:48.721 "uuid": "c867f0ea-acaa-5db0-a258-72b444962707", 00:21:48.721 "is_configured": true, 00:21:48.721 "data_offset": 2048, 00:21:48.721 "data_size": 63488 00:21:48.721 } 00:21:48.721 ] 00:21:48.721 }' 00:21:48.721 06:14:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:48.980 06:14:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:48.980 06:14:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:48.980 06:14:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:48.980 06:14:19 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:48.980 [2024-06-11 06:14:19.431311] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:49.240 [2024-06-11 06:14:19.648733] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:49.240 [2024-06-11 06:14:19.649395] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:49.499 [2024-06-11 06:14:20.085450] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:21:49.758 [2024-06-11 06:14:20.307875] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:50.017 06:14:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:50.017 06:14:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:50.017 06:14:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:50.017 06:14:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:50.017 06:14:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:50.017 06:14:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:50.017 06:14:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.017 06:14:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.017 [2024-06-11 06:14:20.526389] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:50.277 06:14:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:50.277 "name": "raid_bdev1", 00:21:50.277 "uuid": "f21632ce-3471-44db-864e-6dd605a84f76", 00:21:50.277 "strip_size_kb": 0, 00:21:50.277 "state": "online", 00:21:50.277 "raid_level": "raid1", 00:21:50.277 "superblock": true, 00:21:50.277 "num_base_bdevs": 2, 00:21:50.277 "num_base_bdevs_discovered": 2, 00:21:50.277 "num_base_bdevs_operational": 2, 00:21:50.277 "process": { 00:21:50.277 "type": "rebuild", 00:21:50.277 "target": "spare", 00:21:50.277 "progress": { 00:21:50.277 "blocks": 38912, 00:21:50.277 "percent": 61 00:21:50.277 } 00:21:50.277 }, 00:21:50.277 "base_bdevs_list": [ 00:21:50.277 { 00:21:50.277 "name": "spare", 00:21:50.277 "uuid": "88101150-d8e7-5b75-8de2-2666092f4bb4", 00:21:50.277 "is_configured": true, 00:21:50.277 "data_offset": 2048, 00:21:50.277 "data_size": 63488 00:21:50.277 }, 00:21:50.277 { 00:21:50.277 "name": "BaseBdev2", 00:21:50.277 "uuid": "c867f0ea-acaa-5db0-a258-72b444962707", 00:21:50.277 "is_configured": true, 00:21:50.277 "data_offset": 2048, 00:21:50.277 "data_size": 63488 00:21:50.277 } 
00:21:50.277 ] 00:21:50.277 }' 00:21:50.277 06:14:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:50.277 06:14:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:50.277 06:14:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:50.277 06:14:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:50.277 06:14:20 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:51.236 06:14:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:51.236 06:14:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:51.236 06:14:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:51.236 06:14:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:51.236 06:14:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:51.236 06:14:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:51.236 06:14:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.236 06:14:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.495 [2024-06-11 06:14:21.941132] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:51.495 06:14:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:51.495 "name": "raid_bdev1", 00:21:51.495 "uuid": "f21632ce-3471-44db-864e-6dd605a84f76", 00:21:51.495 "strip_size_kb": 0, 00:21:51.495 "state": "online", 00:21:51.495 "raid_level": "raid1", 00:21:51.495 "superblock": true, 00:21:51.495 "num_base_bdevs": 2, 00:21:51.495 "num_base_bdevs_discovered": 2, 00:21:51.495 "num_base_bdevs_operational": 2, 00:21:51.495 "process": { 00:21:51.495 "type": "rebuild", 00:21:51.495 "target": "spare", 00:21:51.495 "progress": { 00:21:51.495 "blocks": 63488, 00:21:51.495 "percent": 100 00:21:51.495 } 00:21:51.495 }, 00:21:51.495 "base_bdevs_list": [ 00:21:51.495 { 00:21:51.495 "name": "spare", 00:21:51.495 "uuid": "88101150-d8e7-5b75-8de2-2666092f4bb4", 00:21:51.495 "is_configured": true, 00:21:51.495 "data_offset": 2048, 00:21:51.495 "data_size": 63488 00:21:51.495 }, 00:21:51.495 { 00:21:51.495 "name": "BaseBdev2", 00:21:51.495 "uuid": "c867f0ea-acaa-5db0-a258-72b444962707", 00:21:51.495 "is_configured": true, 00:21:51.495 "data_offset": 2048, 00:21:51.495 "data_size": 63488 00:21:51.495 } 00:21:51.495 ] 00:21:51.495 }' 00:21:51.495 06:14:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:51.495 06:14:22 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:51.495 06:14:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:51.495 [2024-06-11 06:14:22.041190] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:51.495 [2024-06-11 06:14:22.043754] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:51.496 06:14:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:51.496 06:14:22 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:52.874 06:14:23 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:52.874 06:14:23 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:52.874 06:14:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:52.874 06:14:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:52.874 06:14:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:52.874 06:14:23 -- bdev/bdev_raid.sh@186 -- # local 
raid_bdev_info 00:21:52.874 06:14:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.874 06:14:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.874 06:14:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:52.874 "name": "raid_bdev1", 00:21:52.874 "uuid": "f21632ce-3471-44db-864e-6dd605a84f76", 00:21:52.874 "strip_size_kb": 0, 00:21:52.874 "state": "online", 00:21:52.874 "raid_level": "raid1", 00:21:52.874 "superblock": true, 00:21:52.874 "num_base_bdevs": 2, 00:21:52.874 "num_base_bdevs_discovered": 2, 00:21:52.874 "num_base_bdevs_operational": 2, 00:21:52.874 "base_bdevs_list": [ 00:21:52.874 { 00:21:52.874 "name": "spare", 00:21:52.874 "uuid": "88101150-d8e7-5b75-8de2-2666092f4bb4", 00:21:52.874 "is_configured": true, 00:21:52.874 "data_offset": 2048, 00:21:52.874 "data_size": 63488 00:21:52.874 }, 00:21:52.874 { 00:21:52.874 "name": "BaseBdev2", 00:21:52.874 "uuid": "c867f0ea-acaa-5db0-a258-72b444962707", 00:21:52.874 "is_configured": true, 00:21:52.874 "data_offset": 2048, 00:21:52.874 "data_size": 63488 00:21:52.874 } 00:21:52.874 ] 00:21:52.874 }' 00:21:52.874 06:14:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:52.874 06:14:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:52.874 06:14:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:52.874 06:14:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:52.874 06:14:23 -- bdev/bdev_raid.sh@660 -- # break 00:21:52.874 06:14:23 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:52.874 06:14:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:52.874 06:14:23 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:52.874 06:14:23 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:52.874 06:14:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:52.874 06:14:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.874 06:14:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.133 06:14:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:53.133 "name": "raid_bdev1", 00:21:53.133 "uuid": "f21632ce-3471-44db-864e-6dd605a84f76", 00:21:53.133 "strip_size_kb": 0, 00:21:53.133 "state": "online", 00:21:53.133 "raid_level": "raid1", 00:21:53.133 "superblock": true, 00:21:53.133 "num_base_bdevs": 2, 00:21:53.133 "num_base_bdevs_discovered": 2, 00:21:53.133 "num_base_bdevs_operational": 2, 00:21:53.133 "base_bdevs_list": [ 00:21:53.133 { 00:21:53.133 "name": "spare", 00:21:53.134 "uuid": "88101150-d8e7-5b75-8de2-2666092f4bb4", 00:21:53.134 "is_configured": true, 00:21:53.134 "data_offset": 2048, 00:21:53.134 "data_size": 63488 00:21:53.134 }, 00:21:53.134 { 00:21:53.134 "name": "BaseBdev2", 00:21:53.134 "uuid": "c867f0ea-acaa-5db0-a258-72b444962707", 00:21:53.134 "is_configured": true, 00:21:53.134 "data_offset": 2048, 00:21:53.134 "data_size": 63488 00:21:53.134 } 00:21:53.134 ] 00:21:53.134 }' 00:21:53.134 06:14:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:53.134 06:14:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:53.134 06:14:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:53.134 06:14:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:53.134 06:14:23 -- bdev/bdev_raid.sh@667 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:53.134 06:14:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:53.134 06:14:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:53.134 06:14:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:53.134 06:14:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:53.134 06:14:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:53.134 06:14:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:53.134 06:14:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:53.134 06:14:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:53.134 06:14:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:53.134 06:14:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.134 06:14:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.393 06:14:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:53.393 "name": "raid_bdev1", 00:21:53.393 "uuid": "f21632ce-3471-44db-864e-6dd605a84f76", 00:21:53.393 "strip_size_kb": 0, 00:21:53.393 "state": "online", 00:21:53.393 "raid_level": "raid1", 00:21:53.393 "superblock": true, 00:21:53.393 "num_base_bdevs": 2, 00:21:53.393 "num_base_bdevs_discovered": 2, 00:21:53.393 "num_base_bdevs_operational": 2, 00:21:53.393 "base_bdevs_list": [ 00:21:53.393 { 00:21:53.393 "name": "spare", 00:21:53.393 "uuid": "88101150-d8e7-5b75-8de2-2666092f4bb4", 00:21:53.393 "is_configured": true, 00:21:53.393 "data_offset": 2048, 00:21:53.393 "data_size": 63488 00:21:53.393 }, 00:21:53.393 { 00:21:53.393 "name": "BaseBdev2", 00:21:53.393 "uuid": "c867f0ea-acaa-5db0-a258-72b444962707", 00:21:53.393 "is_configured": true, 00:21:53.393 "data_offset": 2048, 00:21:53.393 "data_size": 63488 00:21:53.393 } 00:21:53.393 ] 00:21:53.393 }' 00:21:53.393 06:14:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:53.393 06:14:24 -- common/autotest_common.sh@10 -- # set +x 00:21:53.961 06:14:24 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:54.220 [2024-06-11 06:14:24.763754] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:54.220 [2024-06-11 06:14:24.763799] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:54.479 00:21:54.479 Latency(us) 00:21:54.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.479 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:54.479 raid_bdev1 : 11.36 109.86 329.58 0.00 0.00 13284.53 296.47 120835.90 00:21:54.479 =================================================================================================================== 00:21:54.479 Total : 109.86 329.58 0.00 0.00 13284.53 296.47 120835.90 00:21:54.479 [2024-06-11 06:14:24.892186] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:54.479 [2024-06-11 06:14:24.892237] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:54.479 [2024-06-11 06:14:24.892322] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:54.479 [2024-06-11 06:14:24.892332] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:21:54.479 0 00:21:54.479 06:14:24 -- 
bdev/bdev_raid.sh@671 -- # jq length 00:21:54.479 06:14:24 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.739 06:14:25 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:54.739 06:14:25 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:21:54.739 06:14:25 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:54.739 06:14:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:54.739 06:14:25 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:54.739 06:14:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:54.739 06:14:25 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:54.739 06:14:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:54.739 06:14:25 -- bdev/nbd_common.sh@12 -- # local i 00:21:54.739 06:14:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:54.739 06:14:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:54.739 06:14:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:54.739 /dev/nbd0 00:21:54.999 06:14:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:54.999 06:14:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:54.999 06:14:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:54.999 06:14:25 -- common/autotest_common.sh@857 -- # local i 00:21:54.999 06:14:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:54.999 06:14:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:54.999 06:14:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:54.999 06:14:25 -- common/autotest_common.sh@861 -- # break 00:21:54.999 06:14:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:54.999 06:14:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:54.999 06:14:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:54.999 1+0 records in 00:21:54.999 1+0 records out 00:21:54.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338844 s, 12.1 MB/s 00:21:54.999 06:14:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:54.999 06:14:25 -- common/autotest_common.sh@874 -- # size=4096 00:21:54.999 06:14:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:54.999 06:14:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:54.999 06:14:25 -- common/autotest_common.sh@877 -- # return 0 00:21:54.999 06:14:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:54.999 06:14:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:54.999 06:14:25 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:54.999 06:14:25 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:21:54.999 06:14:25 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:21:54.999 06:14:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:54.999 06:14:25 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:21:54.999 06:14:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:54.999 06:14:25 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:54.999 06:14:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:54.999 06:14:25 -- bdev/nbd_common.sh@12 -- # local i 00:21:54.999 06:14:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:54.999 
06:14:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:54.999 06:14:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:21:54.999 /dev/nbd1 00:21:54.999 06:14:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:54.999 06:14:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:54.999 06:14:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:54.999 06:14:25 -- common/autotest_common.sh@857 -- # local i 00:21:54.999 06:14:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:54.999 06:14:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:54.999 06:14:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:54.999 06:14:25 -- common/autotest_common.sh@861 -- # break 00:21:54.999 06:14:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:54.999 06:14:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:54.999 06:14:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:54.999 1+0 records in 00:21:54.999 1+0 records out 00:21:54.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362765 s, 11.3 MB/s 00:21:54.999 06:14:25 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:54.999 06:14:25 -- common/autotest_common.sh@874 -- # size=4096 00:21:54.999 06:14:25 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:54.999 06:14:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:54.999 06:14:25 -- common/autotest_common.sh@877 -- # return 0 00:21:54.999 06:14:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:54.999 06:14:25 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:54.999 06:14:25 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:55.259 06:14:25 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:55.259 06:14:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:55.259 06:14:25 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:55.259 06:14:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:55.259 06:14:25 -- bdev/nbd_common.sh@51 -- # local i 00:21:55.259 06:14:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:55.259 06:14:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:55.518 06:14:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:55.518 06:14:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:55.518 06:14:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:55.518 06:14:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:55.518 06:14:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:55.518 06:14:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:55.518 06:14:26 -- bdev/nbd_common.sh@41 -- # break 00:21:55.518 06:14:26 -- bdev/nbd_common.sh@45 -- # return 0 00:21:55.518 06:14:26 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:55.518 06:14:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:55.518 06:14:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:55.518 06:14:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:55.518 06:14:26 -- bdev/nbd_common.sh@51 -- # local i 00:21:55.518 06:14:26 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:21:55.518 06:14:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:55.777 06:14:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:55.777 06:14:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:55.777 06:14:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:55.777 06:14:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:55.777 06:14:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:55.777 06:14:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:55.777 06:14:26 -- bdev/nbd_common.sh@41 -- # break 00:21:55.777 06:14:26 -- bdev/nbd_common.sh@45 -- # return 0 00:21:55.777 06:14:26 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:55.777 06:14:26 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:55.777 06:14:26 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:55.777 06:14:26 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:56.036 06:14:26 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:56.295 [2024-06-11 06:14:26.767539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:56.295 [2024-06-11 06:14:26.767659] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.295 [2024-06-11 06:14:26.767697] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:56.295 [2024-06-11 06:14:26.767726] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.295 [2024-06-11 06:14:26.770437] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.295 [2024-06-11 06:14:26.770507] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:56.295 [2024-06-11 06:14:26.770653] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:56.295 [2024-06-11 06:14:26.770724] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:56.295 BaseBdev1 00:21:56.295 06:14:26 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:56.295 06:14:26 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:21:56.295 06:14:26 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:21:56.554 06:14:27 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:56.814 [2024-06-11 06:14:27.247736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:56.814 [2024-06-11 06:14:27.247837] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.814 [2024-06-11 06:14:27.247884] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:56.814 [2024-06-11 06:14:27.247916] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:56.814 [2024-06-11 06:14:27.248445] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:56.814 [2024-06-11 06:14:27.248503] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:56.814 [2024-06-11 
06:14:27.248633] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:21:56.814 [2024-06-11 06:14:27.248647] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:21:56.814 [2024-06-11 06:14:27.248654] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:56.814 [2024-06-11 06:14:27.248675] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state configuring 00:21:56.814 [2024-06-11 06:14:27.248774] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:56.814 BaseBdev2 00:21:56.814 06:14:27 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:56.814 06:14:27 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:57.073 [2024-06-11 06:14:27.667848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:57.073 [2024-06-11 06:14:27.667953] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:57.073 [2024-06-11 06:14:27.667996] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:57.073 [2024-06-11 06:14:27.668019] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:57.073 [2024-06-11 06:14:27.668589] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:57.073 [2024-06-11 06:14:27.668642] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:57.073 [2024-06-11 06:14:27.668784] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:57.073 [2024-06-11 06:14:27.668824] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:57.073 spare 00:21:57.073 06:14:27 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:57.073 06:14:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:57.073 06:14:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:57.073 06:14:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:57.073 06:14:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:57.073 06:14:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:57.073 06:14:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:57.073 06:14:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:57.073 06:14:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:57.073 06:14:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:57.073 06:14:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.073 06:14:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.332 [2024-06-11 06:14:27.768933] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:21:57.332 [2024-06-11 06:14:27.768960] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:57.332 [2024-06-11 06:14:27.769157] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30 00:21:57.332 [2024-06-11 06:14:27.769575] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev 
generic 0x61600000ab80 00:21:57.332 [2024-06-11 06:14:27.769596] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:21:57.332 [2024-06-11 06:14:27.769753] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:57.332 06:14:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:57.332 "name": "raid_bdev1", 00:21:57.332 "uuid": "f21632ce-3471-44db-864e-6dd605a84f76", 00:21:57.332 "strip_size_kb": 0, 00:21:57.332 "state": "online", 00:21:57.332 "raid_level": "raid1", 00:21:57.332 "superblock": true, 00:21:57.332 "num_base_bdevs": 2, 00:21:57.332 "num_base_bdevs_discovered": 2, 00:21:57.332 "num_base_bdevs_operational": 2, 00:21:57.332 "base_bdevs_list": [ 00:21:57.332 { 00:21:57.332 "name": "spare", 00:21:57.332 "uuid": "88101150-d8e7-5b75-8de2-2666092f4bb4", 00:21:57.332 "is_configured": true, 00:21:57.332 "data_offset": 2048, 00:21:57.332 "data_size": 63488 00:21:57.332 }, 00:21:57.332 { 00:21:57.332 "name": "BaseBdev2", 00:21:57.332 "uuid": "c867f0ea-acaa-5db0-a258-72b444962707", 00:21:57.332 "is_configured": true, 00:21:57.332 "data_offset": 2048, 00:21:57.332 "data_size": 63488 00:21:57.332 } 00:21:57.332 ] 00:21:57.332 }' 00:21:57.332 06:14:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:57.332 06:14:27 -- common/autotest_common.sh@10 -- # set +x 00:21:57.900 06:14:28 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:57.900 06:14:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:57.900 06:14:28 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:57.900 06:14:28 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:57.900 06:14:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:57.900 06:14:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.900 06:14:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:58.159 06:14:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:58.159 "name": "raid_bdev1", 00:21:58.159 "uuid": "f21632ce-3471-44db-864e-6dd605a84f76", 00:21:58.159 "strip_size_kb": 0, 00:21:58.159 "state": "online", 00:21:58.159 "raid_level": "raid1", 00:21:58.159 "superblock": true, 00:21:58.159 "num_base_bdevs": 2, 00:21:58.159 "num_base_bdevs_discovered": 2, 00:21:58.159 "num_base_bdevs_operational": 2, 00:21:58.159 "base_bdevs_list": [ 00:21:58.159 { 00:21:58.159 "name": "spare", 00:21:58.159 "uuid": "88101150-d8e7-5b75-8de2-2666092f4bb4", 00:21:58.159 "is_configured": true, 00:21:58.159 "data_offset": 2048, 00:21:58.159 "data_size": 63488 00:21:58.159 }, 00:21:58.159 { 00:21:58.159 "name": "BaseBdev2", 00:21:58.159 "uuid": "c867f0ea-acaa-5db0-a258-72b444962707", 00:21:58.159 "is_configured": true, 00:21:58.159 "data_offset": 2048, 00:21:58.159 "data_size": 63488 00:21:58.159 } 00:21:58.159 ] 00:21:58.159 }' 00:21:58.159 06:14:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:58.159 06:14:28 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:58.159 06:14:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:58.159 06:14:28 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:58.159 06:14:28 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.159 06:14:28 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:58.418 06:14:28 -- 
bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:58.418 06:14:28 -- bdev/bdev_raid.sh@709 -- # killprocess 124683 00:21:58.418 06:14:28 -- common/autotest_common.sh@926 -- # '[' -z 124683 ']' 00:21:58.418 06:14:28 -- common/autotest_common.sh@930 -- # kill -0 124683 00:21:58.418 06:14:28 -- common/autotest_common.sh@931 -- # uname 00:21:58.418 06:14:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:58.418 06:14:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124683 00:21:58.418 06:14:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:58.418 06:14:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:58.418 06:14:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124683' 00:21:58.418 killing process with pid 124683 00:21:58.418 Received shutdown signal, test time was about 15.464317 seconds 00:21:58.418 00:21:58.418 Latency(us) 00:21:58.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.418 =================================================================================================================== 00:21:58.418 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:58.418 06:14:28 -- common/autotest_common.sh@945 -- # kill 124683 00:21:58.418 06:14:28 -- common/autotest_common.sh@950 -- # wait 124683 00:21:58.418 [2024-06-11 06:14:28.975373] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:58.418 [2024-06-11 06:14:28.975482] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:58.418 [2024-06-11 06:14:28.975567] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:58.418 [2024-06-11 06:14:28.975581] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:21:58.677 [2024-06-11 06:14:29.216912] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:00.054 ************************************ 00:22:00.054 END TEST raid_rebuild_test_sb_io 00:22:00.054 ************************************ 00:22:00.054 06:14:30 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:00.054 00:22:00.054 real 0m21.170s 00:22:00.054 user 0m31.880s 00:22:00.054 sys 0m3.146s 00:22:00.054 06:14:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:00.054 06:14:30 -- common/autotest_common.sh@10 -- # set +x 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:22:00.313 06:14:30 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:22:00.313 06:14:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:00.313 06:14:30 -- common/autotest_common.sh@10 -- # set +x 00:22:00.313 ************************************ 00:22:00.313 START TEST raid_rebuild_test 00:22:00.313 ************************************ 00:22:00.313 06:14:30 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false false 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:00.313 06:14:30 -- 
bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@544 -- # raid_pid=125251 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@545 -- # waitforlisten 125251 /var/tmp/spdk-raid.sock 00:22:00.313 06:14:30 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:00.313 06:14:30 -- common/autotest_common.sh@819 -- # '[' -z 125251 ']' 00:22:00.313 06:14:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:00.313 06:14:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:00.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:00.313 06:14:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:00.313 06:14:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:00.313 06:14:30 -- common/autotest_common.sh@10 -- # set +x 00:22:00.313 [2024-06-11 06:14:30.798317] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:00.313 [2024-06-11 06:14:30.798491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125251 ] 00:22:00.313 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:00.313 Zero copy mechanism will not be used. 
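For orientation, the launch sequence traced above reduces to the sketch below: raid_rebuild_test starts bdevperf as the SPDK application under test, pointed at a private RPC socket, and everything that follows in the test is plain rpc.py traffic against that socket. The bdevperf flags and paths are copied from the trace; the polling loop is paraphrased from the usual waitforlisten shape, and the 0.1 s interval is an assumption.

rpc_server=/var/tmp/spdk-raid.sock
rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $rpc_server"

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r "$rpc_server" \
    -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!

# wait until the target answers on the UNIX-domain socket, bailing out if it died
while ! $rpc_py rpc_get_methods &> /dev/null; do
    kill -0 "$raid_pid" 2> /dev/null || { echo "bdevperf exited early" >&2; exit 1; }
    sleep 0.1
done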
00:22:00.572 [2024-06-11 06:14:30.961997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.572 [2024-06-11 06:14:31.192242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.831 [2024-06-11 06:14:31.416227] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:01.398 06:14:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:01.398 06:14:31 -- common/autotest_common.sh@852 -- # return 0 00:22:01.398 06:14:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:01.398 06:14:31 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:01.398 06:14:31 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:01.398 BaseBdev1 00:22:01.398 06:14:31 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:01.398 06:14:31 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:01.398 06:14:31 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:01.656 BaseBdev2 00:22:01.656 06:14:32 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:01.656 06:14:32 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:01.656 06:14:32 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:01.914 BaseBdev3 00:22:01.914 06:14:32 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:01.914 06:14:32 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:01.914 06:14:32 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:02.173 BaseBdev4 00:22:02.173 06:14:32 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:02.431 spare_malloc 00:22:02.431 06:14:33 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:02.691 spare_delay 00:22:02.691 06:14:33 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:02.950 [2024-06-11 06:14:33.369218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:02.950 [2024-06-11 06:14:33.369336] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.950 [2024-06-11 06:14:33.369375] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:22:02.950 [2024-06-11 06:14:33.369426] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.950 [2024-06-11 06:14:33.372156] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.950 [2024-06-11 06:14:33.372220] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:02.950 spare 00:22:02.950 06:14:33 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:03.209 [2024-06-11 06:14:33.605293] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:03.209 [2024-06-11 06:14:33.607583] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:03.209 [2024-06-11 06:14:33.607631] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:03.209 [2024-06-11 06:14:33.607660] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:03.209 [2024-06-11 06:14:33.607743] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:22:03.209 [2024-06-11 06:14:33.607752] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:03.209 [2024-06-11 06:14:33.607917] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:22:03.209 [2024-06-11 06:14:33.608296] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:22:03.209 [2024-06-11 06:14:33.608316] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:22:03.209 [2024-06-11 06:14:33.608474] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:03.209 06:14:33 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:03.209 06:14:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:03.209 06:14:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:03.209 06:14:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:03.209 06:14:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:03.209 06:14:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:03.209 06:14:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:03.209 06:14:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:03.209 06:14:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:03.209 06:14:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:03.209 06:14:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.209 06:14:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.209 06:14:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:03.209 "name": "raid_bdev1", 00:22:03.209 "uuid": "a33b6d48-95f3-4e4d-bcab-a1118de65547", 00:22:03.209 "strip_size_kb": 0, 00:22:03.209 "state": "online", 00:22:03.209 "raid_level": "raid1", 00:22:03.209 "superblock": false, 00:22:03.209 "num_base_bdevs": 4, 00:22:03.209 "num_base_bdevs_discovered": 4, 00:22:03.209 "num_base_bdevs_operational": 4, 00:22:03.209 "base_bdevs_list": [ 00:22:03.209 { 00:22:03.209 "name": "BaseBdev1", 00:22:03.209 "uuid": "9e4961f0-736f-485c-9c58-7fb77fa63fb3", 00:22:03.209 "is_configured": true, 00:22:03.209 "data_offset": 0, 00:22:03.209 "data_size": 65536 00:22:03.209 }, 00:22:03.209 { 00:22:03.209 "name": "BaseBdev2", 00:22:03.209 "uuid": "2663e1ce-74ac-4393-9f48-47e33296814b", 00:22:03.209 "is_configured": true, 00:22:03.209 "data_offset": 0, 00:22:03.209 "data_size": 65536 00:22:03.209 }, 00:22:03.209 { 00:22:03.209 "name": "BaseBdev3", 00:22:03.209 "uuid": "56847414-26f3-498a-ac10-79ba3600cc6c", 00:22:03.209 "is_configured": true, 00:22:03.209 "data_offset": 0, 00:22:03.209 "data_size": 65536 00:22:03.209 }, 00:22:03.209 { 00:22:03.209 "name": "BaseBdev4", 00:22:03.209 "uuid": "6d9c8e28-844d-4648-9bac-03d941d53b2f", 00:22:03.209 "is_configured": true, 00:22:03.209 "data_offset": 0, 00:22:03.209 "data_size": 65536 00:22:03.209 } 00:22:03.210 ] 00:22:03.210 }' 00:22:03.210 
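The state check that produced the dump above boils down to one bdev_raid_get_bdevs call filtered with jq, followed by field assertions. A condensed, hedged sketch (helper name shortened here, error handling illustrative; field paths match the dump):

verify_state() {
    local name=$1 expected_state=$2 raid_level=$3 strip_size=$4 operational=$5
    local info
    info=$($rpc_py bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
    [ "$(jq -r '.state' <<< "$info")" = "$expected_state" ] &&
    [ "$(jq -r '.raid_level' <<< "$info")" = "$raid_level" ] &&
    [ "$(jq -r '.strip_size_kb' <<< "$info")" = "$strip_size" ] &&
    [ "$(jq -r '.num_base_bdevs_operational' <<< "$info")" = "$operational" ]
}
# the call traced above is equivalent to: verify_state raid_bdev1 online raid1 0 4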
06:14:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:03.210 06:14:33 -- common/autotest_common.sh@10 -- # set +x 00:22:03.778 06:14:34 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:03.778 06:14:34 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:04.037 [2024-06-11 06:14:34.573689] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:04.037 06:14:34 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:22:04.037 06:14:34 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:04.037 06:14:34 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.296 06:14:34 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:22:04.296 06:14:34 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:22:04.296 06:14:34 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:22:04.296 06:14:34 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:04.296 06:14:34 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:04.296 06:14:34 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:04.296 06:14:34 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:04.296 06:14:34 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:04.296 06:14:34 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:04.296 06:14:34 -- bdev/nbd_common.sh@12 -- # local i 00:22:04.296 06:14:34 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:04.296 06:14:34 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:04.296 06:14:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:04.555 [2024-06-11 06:14:34.993534] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:22:04.555 /dev/nbd0 00:22:04.555 06:14:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:04.555 06:14:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:04.555 06:14:35 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:04.555 06:14:35 -- common/autotest_common.sh@857 -- # local i 00:22:04.555 06:14:35 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:04.555 06:14:35 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:04.555 06:14:35 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:04.555 06:14:35 -- common/autotest_common.sh@861 -- # break 00:22:04.555 06:14:35 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:04.555 06:14:35 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:04.555 06:14:35 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:04.555 1+0 records in 00:22:04.555 1+0 records out 00:22:04.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336484 s, 12.2 MB/s 00:22:04.555 06:14:35 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.555 06:14:35 -- common/autotest_common.sh@874 -- # size=4096 00:22:04.555 06:14:35 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.555 06:14:35 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:04.555 06:14:35 -- common/autotest_common.sh@877 -- # return 0 00:22:04.555 06:14:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:04.555 06:14:35 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:04.555 06:14:35 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:22:04.555 06:14:35 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:22:04.555 06:14:35 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:22:09.838 65536+0 records in 00:22:09.838 65536+0 records out 00:22:09.838 33554432 bytes (34 MB, 32 MiB) copied, 4.50314 s, 7.5 MB/s 00:22:09.838 06:14:39 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:09.838 06:14:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:09.838 06:14:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:09.838 06:14:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:09.838 06:14:39 -- bdev/nbd_common.sh@51 -- # local i 00:22:09.838 06:14:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:09.838 06:14:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:09.838 [2024-06-11 06:14:39.822880] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:09.838 06:14:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:09.838 06:14:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:09.838 06:14:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:09.838 06:14:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:09.838 06:14:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:09.838 06:14:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:09.838 06:14:39 -- bdev/nbd_common.sh@41 -- # break 00:22:09.838 06:14:39 -- bdev/nbd_common.sh@45 -- # return 0 00:22:09.838 06:14:39 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:09.838 [2024-06-11 06:14:40.058599] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:09.838 06:14:40 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:09.838 06:14:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:09.838 06:14:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:09.838 06:14:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:09.838 06:14:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:09.838 06:14:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:09.838 06:14:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:09.838 06:14:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:09.838 06:14:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:09.838 06:14:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:09.838 06:14:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.838 06:14:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.838 06:14:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:09.839 "name": "raid_bdev1", 00:22:09.839 "uuid": "a33b6d48-95f3-4e4d-bcab-a1118de65547", 00:22:09.839 "strip_size_kb": 0, 00:22:09.839 "state": "online", 00:22:09.839 "raid_level": "raid1", 00:22:09.839 "superblock": false, 00:22:09.839 "num_base_bdevs": 4, 00:22:09.839 "num_base_bdevs_discovered": 3, 00:22:09.839 "num_base_bdevs_operational": 3, 00:22:09.839 "base_bdevs_list": [ 00:22:09.839 { 00:22:09.839 "name": null, 00:22:09.839 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:09.839 "is_configured": false, 00:22:09.839 "data_offset": 0, 00:22:09.839 "data_size": 65536 00:22:09.839 }, 00:22:09.839 { 00:22:09.839 "name": "BaseBdev2", 00:22:09.839 "uuid": "2663e1ce-74ac-4393-9f48-47e33296814b", 00:22:09.839 "is_configured": true, 00:22:09.839 "data_offset": 0, 00:22:09.839 "data_size": 65536 00:22:09.839 }, 00:22:09.839 { 00:22:09.839 "name": "BaseBdev3", 00:22:09.839 "uuid": "56847414-26f3-498a-ac10-79ba3600cc6c", 00:22:09.839 "is_configured": true, 00:22:09.839 "data_offset": 0, 00:22:09.839 "data_size": 65536 00:22:09.839 }, 00:22:09.839 { 00:22:09.839 "name": "BaseBdev4", 00:22:09.839 "uuid": "6d9c8e28-844d-4648-9bac-03d941d53b2f", 00:22:09.839 "is_configured": true, 00:22:09.839 "data_offset": 0, 00:22:09.839 "data_size": 65536 00:22:09.839 } 00:22:09.839 ] 00:22:09.839 }' 00:22:09.839 06:14:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:09.839 06:14:40 -- common/autotest_common.sh@10 -- # set +x 00:22:10.411 06:14:40 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:10.670 [2024-06-11 06:14:41.202782] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:10.670 [2024-06-11 06:14:41.202836] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:10.670 [2024-06-11 06:14:41.217198] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d096f0 00:22:10.670 [2024-06-11 06:14:41.219497] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:10.670 06:14:41 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:11.608 06:14:42 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:11.608 06:14:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:11.608 06:14:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:11.608 06:14:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:11.608 06:14:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:11.608 06:14:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.608 06:14:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.878 06:14:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:11.878 "name": "raid_bdev1", 00:22:11.878 "uuid": "a33b6d48-95f3-4e4d-bcab-a1118de65547", 00:22:11.878 "strip_size_kb": 0, 00:22:11.878 "state": "online", 00:22:11.878 "raid_level": "raid1", 00:22:11.878 "superblock": false, 00:22:11.878 "num_base_bdevs": 4, 00:22:11.878 "num_base_bdevs_discovered": 4, 00:22:11.879 "num_base_bdevs_operational": 4, 00:22:11.879 "process": { 00:22:11.879 "type": "rebuild", 00:22:11.879 "target": "spare", 00:22:11.879 "progress": { 00:22:11.879 "blocks": 24576, 00:22:11.879 "percent": 37 00:22:11.879 } 00:22:11.879 }, 00:22:11.879 "base_bdevs_list": [ 00:22:11.879 { 00:22:11.879 "name": "spare", 00:22:11.879 "uuid": "ad938e2d-64fa-500f-b167-6f2cbcbcd362", 00:22:11.879 "is_configured": true, 00:22:11.879 "data_offset": 0, 00:22:11.879 "data_size": 65536 00:22:11.879 }, 00:22:11.879 { 00:22:11.879 "name": "BaseBdev2", 00:22:11.879 "uuid": "2663e1ce-74ac-4393-9f48-47e33296814b", 00:22:11.879 "is_configured": true, 00:22:11.879 "data_offset": 0, 00:22:11.879 "data_size": 65536 00:22:11.879 }, 00:22:11.879 { 00:22:11.879 "name": "BaseBdev3", 00:22:11.879 
"uuid": "56847414-26f3-498a-ac10-79ba3600cc6c", 00:22:11.879 "is_configured": true, 00:22:11.879 "data_offset": 0, 00:22:11.879 "data_size": 65536 00:22:11.879 }, 00:22:11.879 { 00:22:11.879 "name": "BaseBdev4", 00:22:11.879 "uuid": "6d9c8e28-844d-4648-9bac-03d941d53b2f", 00:22:11.879 "is_configured": true, 00:22:11.879 "data_offset": 0, 00:22:11.879 "data_size": 65536 00:22:11.879 } 00:22:11.879 ] 00:22:11.879 }' 00:22:11.879 06:14:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:12.180 06:14:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:12.180 06:14:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:12.180 06:14:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:12.180 06:14:42 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:12.180 [2024-06-11 06:14:42.725695] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:12.180 [2024-06-11 06:14:42.730303] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:12.180 [2024-06-11 06:14:42.730434] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:12.180 06:14:42 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:12.180 06:14:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:12.180 06:14:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:12.180 06:14:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:12.180 06:14:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:12.180 06:14:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:12.180 06:14:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:12.180 06:14:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:12.180 06:14:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:12.180 06:14:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:12.180 06:14:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.180 06:14:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.439 06:14:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:12.439 "name": "raid_bdev1", 00:22:12.439 "uuid": "a33b6d48-95f3-4e4d-bcab-a1118de65547", 00:22:12.439 "strip_size_kb": 0, 00:22:12.439 "state": "online", 00:22:12.439 "raid_level": "raid1", 00:22:12.439 "superblock": false, 00:22:12.439 "num_base_bdevs": 4, 00:22:12.439 "num_base_bdevs_discovered": 3, 00:22:12.439 "num_base_bdevs_operational": 3, 00:22:12.439 "base_bdevs_list": [ 00:22:12.439 { 00:22:12.439 "name": null, 00:22:12.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.439 "is_configured": false, 00:22:12.439 "data_offset": 0, 00:22:12.439 "data_size": 65536 00:22:12.439 }, 00:22:12.439 { 00:22:12.439 "name": "BaseBdev2", 00:22:12.439 "uuid": "2663e1ce-74ac-4393-9f48-47e33296814b", 00:22:12.439 "is_configured": true, 00:22:12.439 "data_offset": 0, 00:22:12.439 "data_size": 65536 00:22:12.439 }, 00:22:12.439 { 00:22:12.439 "name": "BaseBdev3", 00:22:12.439 "uuid": "56847414-26f3-498a-ac10-79ba3600cc6c", 00:22:12.439 "is_configured": true, 00:22:12.439 "data_offset": 0, 00:22:12.439 "data_size": 65536 00:22:12.439 }, 00:22:12.439 { 00:22:12.439 "name": "BaseBdev4", 00:22:12.439 "uuid": "6d9c8e28-844d-4648-9bac-03d941d53b2f", 
00:22:12.439 "is_configured": true, 00:22:12.439 "data_offset": 0, 00:22:12.439 "data_size": 65536 00:22:12.439 } 00:22:12.439 ] 00:22:12.439 }' 00:22:12.439 06:14:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:12.439 06:14:42 -- common/autotest_common.sh@10 -- # set +x 00:22:13.007 06:14:43 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:13.007 06:14:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:13.007 06:14:43 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:13.007 06:14:43 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:13.007 06:14:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:13.007 06:14:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.007 06:14:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.266 06:14:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:13.266 "name": "raid_bdev1", 00:22:13.266 "uuid": "a33b6d48-95f3-4e4d-bcab-a1118de65547", 00:22:13.266 "strip_size_kb": 0, 00:22:13.266 "state": "online", 00:22:13.266 "raid_level": "raid1", 00:22:13.266 "superblock": false, 00:22:13.266 "num_base_bdevs": 4, 00:22:13.266 "num_base_bdevs_discovered": 3, 00:22:13.266 "num_base_bdevs_operational": 3, 00:22:13.266 "base_bdevs_list": [ 00:22:13.266 { 00:22:13.266 "name": null, 00:22:13.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.266 "is_configured": false, 00:22:13.266 "data_offset": 0, 00:22:13.266 "data_size": 65536 00:22:13.266 }, 00:22:13.266 { 00:22:13.266 "name": "BaseBdev2", 00:22:13.266 "uuid": "2663e1ce-74ac-4393-9f48-47e33296814b", 00:22:13.266 "is_configured": true, 00:22:13.266 "data_offset": 0, 00:22:13.266 "data_size": 65536 00:22:13.266 }, 00:22:13.266 { 00:22:13.266 "name": "BaseBdev3", 00:22:13.266 "uuid": "56847414-26f3-498a-ac10-79ba3600cc6c", 00:22:13.266 "is_configured": true, 00:22:13.266 "data_offset": 0, 00:22:13.266 "data_size": 65536 00:22:13.266 }, 00:22:13.266 { 00:22:13.266 "name": "BaseBdev4", 00:22:13.266 "uuid": "6d9c8e28-844d-4648-9bac-03d941d53b2f", 00:22:13.266 "is_configured": true, 00:22:13.266 "data_offset": 0, 00:22:13.266 "data_size": 65536 00:22:13.266 } 00:22:13.266 ] 00:22:13.266 }' 00:22:13.266 06:14:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:13.266 06:14:43 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:13.266 06:14:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:13.266 06:14:43 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:13.266 06:14:43 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:13.524 [2024-06-11 06:14:44.037372] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:13.524 [2024-06-11 06:14:44.037422] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:13.524 [2024-06-11 06:14:44.050176] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09890 00:22:13.524 [2024-06-11 06:14:44.052424] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:13.524 06:14:44 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:14.460 06:14:45 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:14.460 06:14:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 
00:22:14.460 06:14:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:14.460 06:14:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:14.460 06:14:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:14.460 06:14:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.460 06:14:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.719 06:14:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:14.719 "name": "raid_bdev1", 00:22:14.719 "uuid": "a33b6d48-95f3-4e4d-bcab-a1118de65547", 00:22:14.719 "strip_size_kb": 0, 00:22:14.719 "state": "online", 00:22:14.719 "raid_level": "raid1", 00:22:14.719 "superblock": false, 00:22:14.719 "num_base_bdevs": 4, 00:22:14.719 "num_base_bdevs_discovered": 4, 00:22:14.719 "num_base_bdevs_operational": 4, 00:22:14.719 "process": { 00:22:14.719 "type": "rebuild", 00:22:14.719 "target": "spare", 00:22:14.719 "progress": { 00:22:14.719 "blocks": 24576, 00:22:14.719 "percent": 37 00:22:14.719 } 00:22:14.719 }, 00:22:14.719 "base_bdevs_list": [ 00:22:14.719 { 00:22:14.719 "name": "spare", 00:22:14.719 "uuid": "ad938e2d-64fa-500f-b167-6f2cbcbcd362", 00:22:14.719 "is_configured": true, 00:22:14.719 "data_offset": 0, 00:22:14.719 "data_size": 65536 00:22:14.719 }, 00:22:14.719 { 00:22:14.719 "name": "BaseBdev2", 00:22:14.719 "uuid": "2663e1ce-74ac-4393-9f48-47e33296814b", 00:22:14.719 "is_configured": true, 00:22:14.719 "data_offset": 0, 00:22:14.719 "data_size": 65536 00:22:14.719 }, 00:22:14.719 { 00:22:14.719 "name": "BaseBdev3", 00:22:14.719 "uuid": "56847414-26f3-498a-ac10-79ba3600cc6c", 00:22:14.719 "is_configured": true, 00:22:14.719 "data_offset": 0, 00:22:14.719 "data_size": 65536 00:22:14.719 }, 00:22:14.719 { 00:22:14.719 "name": "BaseBdev4", 00:22:14.719 "uuid": "6d9c8e28-844d-4648-9bac-03d941d53b2f", 00:22:14.719 "is_configured": true, 00:22:14.719 "data_offset": 0, 00:22:14.719 "data_size": 65536 00:22:14.719 } 00:22:14.719 ] 00:22:14.719 }' 00:22:14.719 06:14:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:14.719 06:14:45 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:14.719 06:14:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:14.979 06:14:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:14.979 06:14:45 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:22:14.979 06:14:45 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:14.979 06:14:45 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:14.979 06:14:45 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:14.979 06:14:45 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:14.979 [2024-06-11 06:14:45.598375] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:15.238 [2024-06-11 06:14:45.664474] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09890 00:22:15.238 06:14:45 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:15.238 06:14:45 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:15.238 06:14:45 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:15.238 06:14:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:15.238 06:14:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:15.238 06:14:45 -- 
bdev/bdev_raid.sh@185 -- # local target=spare 00:22:15.238 06:14:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:15.238 06:14:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.238 06:14:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.238 06:14:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:15.238 "name": "raid_bdev1", 00:22:15.238 "uuid": "a33b6d48-95f3-4e4d-bcab-a1118de65547", 00:22:15.238 "strip_size_kb": 0, 00:22:15.238 "state": "online", 00:22:15.238 "raid_level": "raid1", 00:22:15.238 "superblock": false, 00:22:15.238 "num_base_bdevs": 4, 00:22:15.238 "num_base_bdevs_discovered": 3, 00:22:15.238 "num_base_bdevs_operational": 3, 00:22:15.238 "process": { 00:22:15.238 "type": "rebuild", 00:22:15.238 "target": "spare", 00:22:15.238 "progress": { 00:22:15.238 "blocks": 34816, 00:22:15.238 "percent": 53 00:22:15.238 } 00:22:15.238 }, 00:22:15.238 "base_bdevs_list": [ 00:22:15.238 { 00:22:15.238 "name": "spare", 00:22:15.238 "uuid": "ad938e2d-64fa-500f-b167-6f2cbcbcd362", 00:22:15.238 "is_configured": true, 00:22:15.238 "data_offset": 0, 00:22:15.238 "data_size": 65536 00:22:15.238 }, 00:22:15.238 { 00:22:15.238 "name": null, 00:22:15.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.238 "is_configured": false, 00:22:15.238 "data_offset": 0, 00:22:15.238 "data_size": 65536 00:22:15.238 }, 00:22:15.238 { 00:22:15.238 "name": "BaseBdev3", 00:22:15.238 "uuid": "56847414-26f3-498a-ac10-79ba3600cc6c", 00:22:15.238 "is_configured": true, 00:22:15.238 "data_offset": 0, 00:22:15.238 "data_size": 65536 00:22:15.238 }, 00:22:15.238 { 00:22:15.238 "name": "BaseBdev4", 00:22:15.238 "uuid": "6d9c8e28-844d-4648-9bac-03d941d53b2f", 00:22:15.238 "is_configured": true, 00:22:15.238 "data_offset": 0, 00:22:15.238 "data_size": 65536 00:22:15.238 } 00:22:15.238 ] 00:22:15.238 }' 00:22:15.238 06:14:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:15.497 06:14:45 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:15.497 06:14:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:15.497 06:14:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:15.497 06:14:45 -- bdev/bdev_raid.sh@657 -- # local timeout=478 00:22:15.497 06:14:45 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:15.497 06:14:45 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:15.497 06:14:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:15.497 06:14:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:15.497 06:14:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:15.497 06:14:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:15.497 06:14:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.497 06:14:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.757 06:14:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:15.757 "name": "raid_bdev1", 00:22:15.757 "uuid": "a33b6d48-95f3-4e4d-bcab-a1118de65547", 00:22:15.757 "strip_size_kb": 0, 00:22:15.757 "state": "online", 00:22:15.757 "raid_level": "raid1", 00:22:15.757 "superblock": false, 00:22:15.757 "num_base_bdevs": 4, 00:22:15.757 "num_base_bdevs_discovered": 3, 00:22:15.757 "num_base_bdevs_operational": 3, 00:22:15.757 "process": { 00:22:15.757 "type": 
"rebuild", 00:22:15.757 "target": "spare", 00:22:15.757 "progress": { 00:22:15.757 "blocks": 40960, 00:22:15.757 "percent": 62 00:22:15.757 } 00:22:15.757 }, 00:22:15.757 "base_bdevs_list": [ 00:22:15.757 { 00:22:15.757 "name": "spare", 00:22:15.757 "uuid": "ad938e2d-64fa-500f-b167-6f2cbcbcd362", 00:22:15.757 "is_configured": true, 00:22:15.757 "data_offset": 0, 00:22:15.757 "data_size": 65536 00:22:15.757 }, 00:22:15.757 { 00:22:15.757 "name": null, 00:22:15.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:15.757 "is_configured": false, 00:22:15.757 "data_offset": 0, 00:22:15.757 "data_size": 65536 00:22:15.757 }, 00:22:15.757 { 00:22:15.757 "name": "BaseBdev3", 00:22:15.757 "uuid": "56847414-26f3-498a-ac10-79ba3600cc6c", 00:22:15.757 "is_configured": true, 00:22:15.757 "data_offset": 0, 00:22:15.757 "data_size": 65536 00:22:15.757 }, 00:22:15.757 { 00:22:15.757 "name": "BaseBdev4", 00:22:15.757 "uuid": "6d9c8e28-844d-4648-9bac-03d941d53b2f", 00:22:15.757 "is_configured": true, 00:22:15.757 "data_offset": 0, 00:22:15.757 "data_size": 65536 00:22:15.757 } 00:22:15.757 ] 00:22:15.757 }' 00:22:15.757 06:14:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:15.757 06:14:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:15.757 06:14:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:15.757 06:14:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:15.757 06:14:46 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:16.694 06:14:47 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:16.694 06:14:47 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:16.694 06:14:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:16.694 06:14:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:16.694 06:14:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:16.694 06:14:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:16.694 06:14:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.694 06:14:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.694 [2024-06-11 06:14:47.283199] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:16.694 [2024-06-11 06:14:47.283272] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:16.694 [2024-06-11 06:14:47.283352] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:16.954 06:14:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:16.954 "name": "raid_bdev1", 00:22:16.954 "uuid": "a33b6d48-95f3-4e4d-bcab-a1118de65547", 00:22:16.954 "strip_size_kb": 0, 00:22:16.954 "state": "online", 00:22:16.954 "raid_level": "raid1", 00:22:16.954 "superblock": false, 00:22:16.954 "num_base_bdevs": 4, 00:22:16.954 "num_base_bdevs_discovered": 3, 00:22:16.954 "num_base_bdevs_operational": 3, 00:22:16.954 "base_bdevs_list": [ 00:22:16.954 { 00:22:16.954 "name": "spare", 00:22:16.954 "uuid": "ad938e2d-64fa-500f-b167-6f2cbcbcd362", 00:22:16.954 "is_configured": true, 00:22:16.954 "data_offset": 0, 00:22:16.954 "data_size": 65536 00:22:16.954 }, 00:22:16.954 { 00:22:16.954 "name": null, 00:22:16.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.954 "is_configured": false, 00:22:16.954 "data_offset": 0, 00:22:16.954 "data_size": 65536 00:22:16.954 }, 00:22:16.954 { 00:22:16.954 "name": 
"BaseBdev3", 00:22:16.954 "uuid": "56847414-26f3-498a-ac10-79ba3600cc6c", 00:22:16.954 "is_configured": true, 00:22:16.954 "data_offset": 0, 00:22:16.954 "data_size": 65536 00:22:16.954 }, 00:22:16.954 { 00:22:16.954 "name": "BaseBdev4", 00:22:16.954 "uuid": "6d9c8e28-844d-4648-9bac-03d941d53b2f", 00:22:16.954 "is_configured": true, 00:22:16.954 "data_offset": 0, 00:22:16.954 "data_size": 65536 00:22:16.954 } 00:22:16.954 ] 00:22:16.954 }' 00:22:16.954 06:14:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:16.954 06:14:47 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:16.954 06:14:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:16.954 06:14:47 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:16.954 06:14:47 -- bdev/bdev_raid.sh@660 -- # break 00:22:16.954 06:14:47 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:16.954 06:14:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:16.954 06:14:47 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:16.954 06:14:47 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:16.954 06:14:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:16.954 06:14:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.954 06:14:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.213 06:14:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:17.213 "name": "raid_bdev1", 00:22:17.213 "uuid": "a33b6d48-95f3-4e4d-bcab-a1118de65547", 00:22:17.213 "strip_size_kb": 0, 00:22:17.213 "state": "online", 00:22:17.213 "raid_level": "raid1", 00:22:17.213 "superblock": false, 00:22:17.213 "num_base_bdevs": 4, 00:22:17.213 "num_base_bdevs_discovered": 3, 00:22:17.213 "num_base_bdevs_operational": 3, 00:22:17.213 "base_bdevs_list": [ 00:22:17.213 { 00:22:17.213 "name": "spare", 00:22:17.213 "uuid": "ad938e2d-64fa-500f-b167-6f2cbcbcd362", 00:22:17.213 "is_configured": true, 00:22:17.213 "data_offset": 0, 00:22:17.213 "data_size": 65536 00:22:17.213 }, 00:22:17.213 { 00:22:17.213 "name": null, 00:22:17.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.213 "is_configured": false, 00:22:17.213 "data_offset": 0, 00:22:17.213 "data_size": 65536 00:22:17.213 }, 00:22:17.213 { 00:22:17.213 "name": "BaseBdev3", 00:22:17.213 "uuid": "56847414-26f3-498a-ac10-79ba3600cc6c", 00:22:17.213 "is_configured": true, 00:22:17.213 "data_offset": 0, 00:22:17.213 "data_size": 65536 00:22:17.213 }, 00:22:17.213 { 00:22:17.213 "name": "BaseBdev4", 00:22:17.213 "uuid": "6d9c8e28-844d-4648-9bac-03d941d53b2f", 00:22:17.213 "is_configured": true, 00:22:17.213 "data_offset": 0, 00:22:17.213 "data_size": 65536 00:22:17.213 } 00:22:17.213 ] 00:22:17.213 }' 00:22:17.213 06:14:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:17.213 06:14:47 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:17.213 06:14:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:17.213 06:14:47 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:17.213 06:14:47 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:17.213 06:14:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:17.214 06:14:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:17.214 06:14:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:17.214 06:14:47 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:17.214 06:14:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:17.214 06:14:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:17.214 06:14:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:17.214 06:14:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:17.214 06:14:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:17.214 06:14:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.214 06:14:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.783 06:14:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:17.783 "name": "raid_bdev1", 00:22:17.783 "uuid": "a33b6d48-95f3-4e4d-bcab-a1118de65547", 00:22:17.783 "strip_size_kb": 0, 00:22:17.783 "state": "online", 00:22:17.783 "raid_level": "raid1", 00:22:17.783 "superblock": false, 00:22:17.783 "num_base_bdevs": 4, 00:22:17.783 "num_base_bdevs_discovered": 3, 00:22:17.783 "num_base_bdevs_operational": 3, 00:22:17.783 "base_bdevs_list": [ 00:22:17.783 { 00:22:17.783 "name": "spare", 00:22:17.783 "uuid": "ad938e2d-64fa-500f-b167-6f2cbcbcd362", 00:22:17.783 "is_configured": true, 00:22:17.783 "data_offset": 0, 00:22:17.783 "data_size": 65536 00:22:17.783 }, 00:22:17.783 { 00:22:17.783 "name": null, 00:22:17.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.783 "is_configured": false, 00:22:17.783 "data_offset": 0, 00:22:17.783 "data_size": 65536 00:22:17.783 }, 00:22:17.783 { 00:22:17.783 "name": "BaseBdev3", 00:22:17.783 "uuid": "56847414-26f3-498a-ac10-79ba3600cc6c", 00:22:17.783 "is_configured": true, 00:22:17.783 "data_offset": 0, 00:22:17.783 "data_size": 65536 00:22:17.783 }, 00:22:17.783 { 00:22:17.783 "name": "BaseBdev4", 00:22:17.783 "uuid": "6d9c8e28-844d-4648-9bac-03d941d53b2f", 00:22:17.783 "is_configured": true, 00:22:17.783 "data_offset": 0, 00:22:17.783 "data_size": 65536 00:22:17.783 } 00:22:17.783 ] 00:22:17.783 }' 00:22:17.783 06:14:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:17.783 06:14:48 -- common/autotest_common.sh@10 -- # set +x 00:22:18.043 06:14:48 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:18.301 [2024-06-11 06:14:48.821160] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:18.301 [2024-06-11 06:14:48.821205] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:18.301 [2024-06-11 06:14:48.821308] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:18.301 [2024-06-11 06:14:48.821400] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:18.301 [2024-06-11 06:14:48.821410] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:22:18.301 06:14:48 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:18.301 06:14:48 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.561 06:14:49 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:18.561 06:14:49 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:22:18.561 06:14:49 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:18.561 06:14:49 -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:22:18.561 06:14:49 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:18.561 06:14:49 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:18.561 06:14:49 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:18.561 06:14:49 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:18.561 06:14:49 -- bdev/nbd_common.sh@12 -- # local i 00:22:18.561 06:14:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:18.561 06:14:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:18.561 06:14:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:18.820 /dev/nbd0 00:22:18.820 06:14:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:18.820 06:14:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:18.820 06:14:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:18.820 06:14:49 -- common/autotest_common.sh@857 -- # local i 00:22:18.820 06:14:49 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:18.820 06:14:49 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:18.820 06:14:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:18.820 06:14:49 -- common/autotest_common.sh@861 -- # break 00:22:18.820 06:14:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:18.820 06:14:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:18.820 06:14:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:18.820 1+0 records in 00:22:18.820 1+0 records out 00:22:18.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307967 s, 13.3 MB/s 00:22:18.820 06:14:49 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:18.820 06:14:49 -- common/autotest_common.sh@874 -- # size=4096 00:22:18.820 06:14:49 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:18.820 06:14:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:18.820 06:14:49 -- common/autotest_common.sh@877 -- # return 0 00:22:18.820 06:14:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:18.820 06:14:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:18.820 06:14:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:19.079 /dev/nbd1 00:22:19.079 06:14:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:19.079 06:14:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:19.079 06:14:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:19.079 06:14:49 -- common/autotest_common.sh@857 -- # local i 00:22:19.079 06:14:49 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:19.079 06:14:49 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:19.079 06:14:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:19.079 06:14:49 -- common/autotest_common.sh@861 -- # break 00:22:19.079 06:14:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:19.079 06:14:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:19.079 06:14:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:19.079 1+0 records in 00:22:19.079 1+0 records out 00:22:19.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000731772 s, 5.6 MB/s 00:22:19.079 06:14:49 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:19.079 06:14:49 -- common/autotest_common.sh@874 -- # size=4096 00:22:19.079 06:14:49 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:19.079 06:14:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:19.079 06:14:49 -- common/autotest_common.sh@877 -- # return 0 00:22:19.079 06:14:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:19.079 06:14:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:19.079 06:14:49 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:19.338 06:14:49 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:19.338 06:14:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:19.338 06:14:49 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:19.338 06:14:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:19.338 06:14:49 -- bdev/nbd_common.sh@51 -- # local i 00:22:19.338 06:14:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:19.338 06:14:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:19.597 06:14:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:19.597 06:14:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:19.597 06:14:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:19.597 06:14:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:19.597 06:14:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:19.597 06:14:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:19.597 06:14:50 -- bdev/nbd_common.sh@41 -- # break 00:22:19.597 06:14:50 -- bdev/nbd_common.sh@45 -- # return 0 00:22:19.597 06:14:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:19.597 06:14:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:19.856 06:14:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:19.856 06:14:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:19.856 06:14:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:19.856 06:14:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:19.856 06:14:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:19.856 06:14:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:19.856 06:14:50 -- bdev/nbd_common.sh@41 -- # break 00:22:19.856 06:14:50 -- bdev/nbd_common.sh@45 -- # return 0 00:22:19.856 06:14:50 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:22:19.856 06:14:50 -- bdev/bdev_raid.sh@709 -- # killprocess 125251 00:22:19.856 06:14:50 -- common/autotest_common.sh@926 -- # '[' -z 125251 ']' 00:22:19.856 06:14:50 -- common/autotest_common.sh@930 -- # kill -0 125251 00:22:19.856 06:14:50 -- common/autotest_common.sh@931 -- # uname 00:22:19.856 06:14:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:19.856 06:14:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 125251 00:22:19.856 06:14:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:19.856 06:14:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:19.856 06:14:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125251' 00:22:19.856 killing process with pid 125251 00:22:19.856 06:14:50 -- common/autotest_common.sh@945 -- # kill 125251 00:22:19.856 Received shutdown 
signal, test time was about 60.000000 seconds 00:22:19.856 00:22:19.856 Latency(us) 00:22:19.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.856 =================================================================================================================== 00:22:19.856 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:19.856 [2024-06-11 06:14:50.346492] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:19.856 06:14:50 -- common/autotest_common.sh@950 -- # wait 125251 00:22:20.425 [2024-06-11 06:14:50.861366] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:21.804 ************************************ 00:22:21.804 END TEST raid_rebuild_test 00:22:21.804 ************************************ 00:22:21.804 06:14:52 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:21.804 00:22:21.804 real 0m21.538s 00:22:21.804 user 0m28.859s 00:22:21.804 sys 0m4.209s 00:22:21.804 06:14:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:21.804 06:14:52 -- common/autotest_common.sh@10 -- # set +x 00:22:21.804 06:14:52 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false 00:22:21.804 06:14:52 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:22:21.804 06:14:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:21.804 06:14:52 -- common/autotest_common.sh@10 -- # set +x 00:22:21.804 ************************************ 00:22:21.804 START TEST raid_rebuild_test_sb 00:22:21.804 ************************************ 00:22:21.804 06:14:52 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true false 00:22:21.804 06:14:52 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:21.804 06:14:52 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:21.804 06:14:52 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:22:21.804 06:14:52 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:22:21.804 06:14:52 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:21.804 06:14:52 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:21.804 06:14:52 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:21.804 06:14:52 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:21.805 06:14:52 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:21.805 06:14:52 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:21.805 06:14:52 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:21.805 06:14:52 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:21.805 06:14:52 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:21.805 06:14:52 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:21.805 06:14:52 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:21.805 06:14:52 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:22:21.805 06:14:52 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:21.805 06:14:52 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:21.805 06:14:52 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:21.805 06:14:52 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:21.805 06:14:52 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:21.805 06:14:52 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:21.805 06:14:52 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:21.805 06:14:52 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:21.805 06:14:52 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:21.805 06:14:52 -- bdev/bdev_raid.sh@528 -- # '[' raid1 
'!=' raid1 ']' 00:22:21.805 06:14:52 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:21.805 06:14:52 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:22:21.805 06:14:52 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:22:21.805 06:14:52 -- bdev/bdev_raid.sh@544 -- # raid_pid=125782 00:22:21.805 06:14:52 -- bdev/bdev_raid.sh@545 -- # waitforlisten 125782 /var/tmp/spdk-raid.sock 00:22:21.805 06:14:52 -- common/autotest_common.sh@819 -- # '[' -z 125782 ']' 00:22:21.805 06:14:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:21.805 06:14:52 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:21.805 06:14:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:21.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:21.805 06:14:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:21.805 06:14:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:21.805 06:14:52 -- common/autotest_common.sh@10 -- # set +x 00:22:21.805 [2024-06-11 06:14:52.422258] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:21.805 [2024-06-11 06:14:52.422397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125782 ] 00:22:21.805 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:21.805 Zero copy mechanism will not be used. 
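For readability, the xtrace block above boils down to roughly the following shell sketch. This is a reconstruction inferred from the trace, not the verbatim bdev_raid.sh source; the loop shape and the backgrounding of bdevperf are assumptions based on the traced commands and the later waitforlisten call:

    # build the list of base bdev names (BaseBdev1 .. BaseBdev4)
    base_bdevs=()
    for (( i = 1; i <= num_base_bdevs; i++ )); do
        base_bdevs+=("BaseBdev$i")
    done
    strip_size=0                                # raid1 carries no stripe size
    [[ $superblock == true ]] && create_arg+=' -s'
    # start bdevperf as the RPC target (-z waits for RPC before running I/O),
    # then block until its UNIX-domain socket is listening
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw \
        -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock

The "I/O size of 3145728 is greater than zero copy threshold (65536)" notice above is the expected consequence of the -o 3M option: bdevperf falls back to the copy path for I/O units larger than its zero-copy threshold, which is informational rather than an error.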
00:22:22.064 [2024-06-11 06:14:52.584999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.323 [2024-06-11 06:14:52.829976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.581 [2024-06-11 06:14:53.059825] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:22.840 06:14:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:22.840 06:14:53 -- common/autotest_common.sh@852 -- # return 0 00:22:22.840 06:14:53 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:22.840 06:14:53 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:22.840 06:14:53 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:23.099 BaseBdev1_malloc 00:22:23.099 06:14:53 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:23.358 [2024-06-11 06:14:53.782213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:23.358 [2024-06-11 06:14:53.782333] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:23.358 [2024-06-11 06:14:53.782376] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:22:23.358 [2024-06-11 06:14:53.782427] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:23.358 [2024-06-11 06:14:53.785302] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:23.358 [2024-06-11 06:14:53.785368] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:23.358 BaseBdev1 00:22:23.358 06:14:53 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:23.358 06:14:53 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:23.358 06:14:53 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:23.617 BaseBdev2_malloc 00:22:23.617 06:14:54 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:23.875 [2024-06-11 06:14:54.325145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:23.875 [2024-06-11 06:14:54.325271] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:23.875 [2024-06-11 06:14:54.325320] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:23.875 [2024-06-11 06:14:54.325379] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:23.875 [2024-06-11 06:14:54.328070] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:23.875 [2024-06-11 06:14:54.328137] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:23.875 BaseBdev2 00:22:23.875 06:14:54 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:23.876 06:14:54 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:23.876 06:14:54 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:24.134 BaseBdev3_malloc 00:22:24.134 06:14:54 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:22:24.393 [2024-06-11 06:14:54.781516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:24.393 [2024-06-11 06:14:54.781620] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:24.393 [2024-06-11 06:14:54.781666] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:22:24.393 [2024-06-11 06:14:54.781711] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:24.393 [2024-06-11 06:14:54.784351] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:24.393 [2024-06-11 06:14:54.784407] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:24.393 BaseBdev3 00:22:24.393 06:14:54 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:24.393 06:14:54 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:24.393 06:14:54 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:24.652 BaseBdev4_malloc 00:22:24.652 06:14:55 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:24.911 [2024-06-11 06:14:55.302821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:24.911 [2024-06-11 06:14:55.302934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:24.911 [2024-06-11 06:14:55.302973] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:22:24.911 [2024-06-11 06:14:55.303021] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:24.911 [2024-06-11 06:14:55.305724] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:24.911 [2024-06-11 06:14:55.305798] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:24.911 BaseBdev4 00:22:24.911 06:14:55 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:24.911 spare_malloc 00:22:24.911 06:14:55 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:25.170 spare_delay 00:22:25.170 06:14:55 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:25.430 [2024-06-11 06:14:55.891290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:25.430 [2024-06-11 06:14:55.891402] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:25.430 [2024-06-11 06:14:55.891439] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:25.430 [2024-06-11 06:14:55.891486] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:25.430 [2024-06-11 06:14:55.894170] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:25.430 [2024-06-11 06:14:55.894230] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:25.430 spare 00:22:25.430 06:14:55 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:25.430 [2024-06-11 06:14:56.075447] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:25.689 [2024-06-11 06:14:56.077746] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:25.689 [2024-06-11 06:14:56.077831] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:25.689 [2024-06-11 06:14:56.077884] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:25.689 [2024-06-11 06:14:56.078091] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:22:25.689 [2024-06-11 06:14:56.078101] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:25.689 [2024-06-11 06:14:56.078244] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:22:25.689 [2024-06-11 06:14:56.078625] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:22:25.689 [2024-06-11 06:14:56.078643] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:22:25.689 [2024-06-11 06:14:56.078831] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:25.689 06:14:56 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:25.689 06:14:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:25.689 06:14:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:25.689 06:14:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:25.689 06:14:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:25.689 06:14:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:25.689 06:14:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:25.689 06:14:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:25.689 06:14:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:25.689 06:14:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:25.689 06:14:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.689 06:14:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.947 06:14:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:25.947 "name": "raid_bdev1", 00:22:25.947 "uuid": "e63e64ac-7a36-4817-8b2e-69f1c60e6914", 00:22:25.947 "strip_size_kb": 0, 00:22:25.947 "state": "online", 00:22:25.947 "raid_level": "raid1", 00:22:25.947 "superblock": true, 00:22:25.947 "num_base_bdevs": 4, 00:22:25.947 "num_base_bdevs_discovered": 4, 00:22:25.947 "num_base_bdevs_operational": 4, 00:22:25.947 "base_bdevs_list": [ 00:22:25.947 { 00:22:25.947 "name": "BaseBdev1", 00:22:25.947 "uuid": "f4ae2dc5-a437-5ff5-a212-bdbb99410b59", 00:22:25.947 "is_configured": true, 00:22:25.947 "data_offset": 2048, 00:22:25.947 "data_size": 63488 00:22:25.947 }, 00:22:25.947 { 00:22:25.947 "name": "BaseBdev2", 00:22:25.947 "uuid": "c4dd5fd1-7110-5e3b-a421-91c5c0132208", 00:22:25.947 "is_configured": true, 00:22:25.947 "data_offset": 2048, 00:22:25.947 "data_size": 63488 00:22:25.947 }, 00:22:25.947 { 00:22:25.947 "name": "BaseBdev3", 00:22:25.947 "uuid": "1da96704-9dff-5041-89cd-3a49781b2299", 00:22:25.947 "is_configured": true, 00:22:25.947 "data_offset": 2048, 00:22:25.947 "data_size": 63488 00:22:25.947 }, 00:22:25.947 
{ 00:22:25.947 "name": "BaseBdev4", 00:22:25.947 "uuid": "ee68c3ed-f309-5c22-abc7-ffab2f536de6", 00:22:25.948 "is_configured": true, 00:22:25.948 "data_offset": 2048, 00:22:25.948 "data_size": 63488 00:22:25.948 } 00:22:25.948 ] 00:22:25.948 }' 00:22:25.948 06:14:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:25.948 06:14:56 -- common/autotest_common.sh@10 -- # set +x 00:22:26.206 06:14:56 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:26.206 06:14:56 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:26.466 [2024-06-11 06:14:56.991715] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:26.466 06:14:57 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:22:26.466 06:14:57 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.466 06:14:57 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:26.726 06:14:57 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:22:26.726 06:14:57 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:22:26.726 06:14:57 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:22:26.726 06:14:57 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:26.726 06:14:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:26.726 06:14:57 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:26.726 06:14:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:26.726 06:14:57 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:26.726 06:14:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:26.726 06:14:57 -- bdev/nbd_common.sh@12 -- # local i 00:22:26.726 06:14:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:26.726 06:14:57 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:26.726 06:14:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:26.985 [2024-06-11 06:14:57.395655] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:26.985 /dev/nbd0 00:22:26.985 06:14:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:26.985 06:14:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:26.985 06:14:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:26.985 06:14:57 -- common/autotest_common.sh@857 -- # local i 00:22:26.985 06:14:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:26.985 06:14:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:26.985 06:14:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:26.985 06:14:57 -- common/autotest_common.sh@861 -- # break 00:22:26.985 06:14:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:26.985 06:14:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:26.985 06:14:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:26.985 1+0 records in 00:22:26.985 1+0 records out 00:22:26.985 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521904 s, 7.8 MB/s 00:22:26.985 06:14:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:26.985 06:14:57 -- common/autotest_common.sh@874 -- # size=4096 00:22:26.985 06:14:57 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:26.985 06:14:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:26.985 06:14:57 -- common/autotest_common.sh@877 -- # return 0 00:22:26.985 06:14:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:26.985 06:14:57 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:26.985 06:14:57 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:22:26.985 06:14:57 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:22:26.985 06:14:57 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:22:32.298 63488+0 records in 00:22:32.298 63488+0 records out 00:22:32.298 32505856 bytes (33 MB, 31 MiB) copied, 5.13005 s, 6.3 MB/s 00:22:32.298 06:15:02 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:32.298 06:15:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:32.298 06:15:02 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:32.298 06:15:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:32.298 06:15:02 -- bdev/nbd_common.sh@51 -- # local i 00:22:32.298 06:15:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:32.298 06:15:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:32.298 [2024-06-11 06:15:02.829313] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:32.298 06:15:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:32.298 06:15:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:32.298 06:15:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:32.298 06:15:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:32.298 06:15:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:32.298 06:15:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:32.298 06:15:02 -- bdev/nbd_common.sh@41 -- # break 00:22:32.298 06:15:02 -- bdev/nbd_common.sh@45 -- # return 0 00:22:32.298 06:15:02 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:32.558 [2024-06-11 06:15:03.024902] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:32.558 06:15:03 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:32.558 06:15:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:32.558 06:15:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:32.558 06:15:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:32.558 06:15:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:32.558 06:15:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:32.558 06:15:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:32.558 06:15:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:32.558 06:15:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:32.558 06:15:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:32.558 06:15:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.558 06:15:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.817 06:15:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:32.817 "name": "raid_bdev1", 00:22:32.817 "uuid": "e63e64ac-7a36-4817-8b2e-69f1c60e6914", 00:22:32.817 "strip_size_kb": 0, 00:22:32.817 "state": "online", 00:22:32.817 
"raid_level": "raid1", 00:22:32.817 "superblock": true, 00:22:32.817 "num_base_bdevs": 4, 00:22:32.817 "num_base_bdevs_discovered": 3, 00:22:32.817 "num_base_bdevs_operational": 3, 00:22:32.817 "base_bdevs_list": [ 00:22:32.817 { 00:22:32.817 "name": null, 00:22:32.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.817 "is_configured": false, 00:22:32.817 "data_offset": 2048, 00:22:32.817 "data_size": 63488 00:22:32.817 }, 00:22:32.817 { 00:22:32.817 "name": "BaseBdev2", 00:22:32.817 "uuid": "c4dd5fd1-7110-5e3b-a421-91c5c0132208", 00:22:32.817 "is_configured": true, 00:22:32.817 "data_offset": 2048, 00:22:32.817 "data_size": 63488 00:22:32.817 }, 00:22:32.817 { 00:22:32.817 "name": "BaseBdev3", 00:22:32.817 "uuid": "1da96704-9dff-5041-89cd-3a49781b2299", 00:22:32.817 "is_configured": true, 00:22:32.817 "data_offset": 2048, 00:22:32.817 "data_size": 63488 00:22:32.817 }, 00:22:32.817 { 00:22:32.817 "name": "BaseBdev4", 00:22:32.817 "uuid": "ee68c3ed-f309-5c22-abc7-ffab2f536de6", 00:22:32.817 "is_configured": true, 00:22:32.817 "data_offset": 2048, 00:22:32.817 "data_size": 63488 00:22:32.817 } 00:22:32.817 ] 00:22:32.817 }' 00:22:32.817 06:15:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:32.817 06:15:03 -- common/autotest_common.sh@10 -- # set +x 00:22:33.385 06:15:03 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:33.644 [2024-06-11 06:15:04.057068] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:33.644 [2024-06-11 06:15:04.057140] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:33.644 [2024-06-11 06:15:04.070331] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca31c0 00:22:33.644 [2024-06-11 06:15:04.072624] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:33.644 06:15:04 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:34.581 06:15:05 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:34.581 06:15:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:34.581 06:15:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:34.581 06:15:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:34.581 06:15:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:34.581 06:15:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.581 06:15:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.840 06:15:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:34.840 "name": "raid_bdev1", 00:22:34.840 "uuid": "e63e64ac-7a36-4817-8b2e-69f1c60e6914", 00:22:34.840 "strip_size_kb": 0, 00:22:34.840 "state": "online", 00:22:34.840 "raid_level": "raid1", 00:22:34.840 "superblock": true, 00:22:34.840 "num_base_bdevs": 4, 00:22:34.840 "num_base_bdevs_discovered": 4, 00:22:34.840 "num_base_bdevs_operational": 4, 00:22:34.840 "process": { 00:22:34.840 "type": "rebuild", 00:22:34.840 "target": "spare", 00:22:34.840 "progress": { 00:22:34.840 "blocks": 24576, 00:22:34.840 "percent": 38 00:22:34.840 } 00:22:34.840 }, 00:22:34.840 "base_bdevs_list": [ 00:22:34.840 { 00:22:34.840 "name": "spare", 00:22:34.840 "uuid": "e355bbc0-31a0-5517-8c09-7ec82fd559b4", 00:22:34.840 "is_configured": true, 00:22:34.840 "data_offset": 2048, 00:22:34.841 "data_size": 63488 00:22:34.841 
}, 00:22:34.841 { 00:22:34.841 "name": "BaseBdev2", 00:22:34.841 "uuid": "c4dd5fd1-7110-5e3b-a421-91c5c0132208", 00:22:34.841 "is_configured": true, 00:22:34.841 "data_offset": 2048, 00:22:34.841 "data_size": 63488 00:22:34.841 }, 00:22:34.841 { 00:22:34.841 "name": "BaseBdev3", 00:22:34.841 "uuid": "1da96704-9dff-5041-89cd-3a49781b2299", 00:22:34.841 "is_configured": true, 00:22:34.841 "data_offset": 2048, 00:22:34.841 "data_size": 63488 00:22:34.841 }, 00:22:34.841 { 00:22:34.841 "name": "BaseBdev4", 00:22:34.841 "uuid": "ee68c3ed-f309-5c22-abc7-ffab2f536de6", 00:22:34.841 "is_configured": true, 00:22:34.841 "data_offset": 2048, 00:22:34.841 "data_size": 63488 00:22:34.841 } 00:22:34.841 ] 00:22:34.841 }' 00:22:34.841 06:15:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:34.841 06:15:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:34.841 06:15:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:34.841 06:15:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:34.841 06:15:05 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:35.125 [2024-06-11 06:15:05.650898] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:35.125 [2024-06-11 06:15:05.684139] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:35.125 [2024-06-11 06:15:05.684228] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:35.125 06:15:05 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:35.125 06:15:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:35.126 06:15:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:35.126 06:15:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:35.126 06:15:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:35.126 06:15:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:35.126 06:15:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:35.126 06:15:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:35.126 06:15:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:35.126 06:15:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:35.126 06:15:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.126 06:15:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.384 06:15:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:35.384 "name": "raid_bdev1", 00:22:35.384 "uuid": "e63e64ac-7a36-4817-8b2e-69f1c60e6914", 00:22:35.384 "strip_size_kb": 0, 00:22:35.384 "state": "online", 00:22:35.384 "raid_level": "raid1", 00:22:35.384 "superblock": true, 00:22:35.384 "num_base_bdevs": 4, 00:22:35.384 "num_base_bdevs_discovered": 3, 00:22:35.384 "num_base_bdevs_operational": 3, 00:22:35.384 "base_bdevs_list": [ 00:22:35.384 { 00:22:35.384 "name": null, 00:22:35.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.384 "is_configured": false, 00:22:35.384 "data_offset": 2048, 00:22:35.384 "data_size": 63488 00:22:35.384 }, 00:22:35.384 { 00:22:35.384 "name": "BaseBdev2", 00:22:35.384 "uuid": "c4dd5fd1-7110-5e3b-a421-91c5c0132208", 00:22:35.384 "is_configured": true, 00:22:35.384 "data_offset": 2048, 00:22:35.384 "data_size": 63488 00:22:35.384 }, 00:22:35.384 { 00:22:35.384 
"name": "BaseBdev3", 00:22:35.384 "uuid": "1da96704-9dff-5041-89cd-3a49781b2299", 00:22:35.384 "is_configured": true, 00:22:35.384 "data_offset": 2048, 00:22:35.384 "data_size": 63488 00:22:35.384 }, 00:22:35.384 { 00:22:35.384 "name": "BaseBdev4", 00:22:35.384 "uuid": "ee68c3ed-f309-5c22-abc7-ffab2f536de6", 00:22:35.384 "is_configured": true, 00:22:35.384 "data_offset": 2048, 00:22:35.384 "data_size": 63488 00:22:35.384 } 00:22:35.384 ] 00:22:35.384 }' 00:22:35.384 06:15:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:35.384 06:15:05 -- common/autotest_common.sh@10 -- # set +x 00:22:35.953 06:15:06 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:35.953 06:15:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:35.953 06:15:06 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:35.953 06:15:06 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:35.953 06:15:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:35.953 06:15:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.953 06:15:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.212 06:15:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:36.212 "name": "raid_bdev1", 00:22:36.212 "uuid": "e63e64ac-7a36-4817-8b2e-69f1c60e6914", 00:22:36.212 "strip_size_kb": 0, 00:22:36.212 "state": "online", 00:22:36.212 "raid_level": "raid1", 00:22:36.212 "superblock": true, 00:22:36.212 "num_base_bdevs": 4, 00:22:36.212 "num_base_bdevs_discovered": 3, 00:22:36.212 "num_base_bdevs_operational": 3, 00:22:36.212 "base_bdevs_list": [ 00:22:36.212 { 00:22:36.212 "name": null, 00:22:36.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.212 "is_configured": false, 00:22:36.212 "data_offset": 2048, 00:22:36.212 "data_size": 63488 00:22:36.212 }, 00:22:36.212 { 00:22:36.212 "name": "BaseBdev2", 00:22:36.212 "uuid": "c4dd5fd1-7110-5e3b-a421-91c5c0132208", 00:22:36.212 "is_configured": true, 00:22:36.212 "data_offset": 2048, 00:22:36.212 "data_size": 63488 00:22:36.212 }, 00:22:36.212 { 00:22:36.212 "name": "BaseBdev3", 00:22:36.212 "uuid": "1da96704-9dff-5041-89cd-3a49781b2299", 00:22:36.212 "is_configured": true, 00:22:36.212 "data_offset": 2048, 00:22:36.212 "data_size": 63488 00:22:36.212 }, 00:22:36.212 { 00:22:36.212 "name": "BaseBdev4", 00:22:36.212 "uuid": "ee68c3ed-f309-5c22-abc7-ffab2f536de6", 00:22:36.212 "is_configured": true, 00:22:36.212 "data_offset": 2048, 00:22:36.212 "data_size": 63488 00:22:36.212 } 00:22:36.212 ] 00:22:36.212 }' 00:22:36.212 06:15:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:36.212 06:15:06 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:36.212 06:15:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:36.212 06:15:06 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:36.212 06:15:06 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:36.471 [2024-06-11 06:15:07.041928] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:36.471 [2024-06-11 06:15:07.041983] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:36.471 [2024-06-11 06:15:07.054998] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:22:36.471 [2024-06-11 06:15:07.057288] 
bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:36.471 06:15:07 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:37.852 06:15:08 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:37.852 06:15:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:37.852 06:15:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:37.852 06:15:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:37.852 06:15:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:37.852 06:15:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.852 06:15:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.852 06:15:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:37.852 "name": "raid_bdev1", 00:22:37.852 "uuid": "e63e64ac-7a36-4817-8b2e-69f1c60e6914", 00:22:37.852 "strip_size_kb": 0, 00:22:37.852 "state": "online", 00:22:37.852 "raid_level": "raid1", 00:22:37.852 "superblock": true, 00:22:37.852 "num_base_bdevs": 4, 00:22:37.852 "num_base_bdevs_discovered": 4, 00:22:37.852 "num_base_bdevs_operational": 4, 00:22:37.852 "process": { 00:22:37.852 "type": "rebuild", 00:22:37.852 "target": "spare", 00:22:37.852 "progress": { 00:22:37.852 "blocks": 24576, 00:22:37.852 "percent": 38 00:22:37.852 } 00:22:37.852 }, 00:22:37.852 "base_bdevs_list": [ 00:22:37.852 { 00:22:37.852 "name": "spare", 00:22:37.852 "uuid": "e355bbc0-31a0-5517-8c09-7ec82fd559b4", 00:22:37.852 "is_configured": true, 00:22:37.852 "data_offset": 2048, 00:22:37.852 "data_size": 63488 00:22:37.852 }, 00:22:37.852 { 00:22:37.852 "name": "BaseBdev2", 00:22:37.852 "uuid": "c4dd5fd1-7110-5e3b-a421-91c5c0132208", 00:22:37.852 "is_configured": true, 00:22:37.852 "data_offset": 2048, 00:22:37.852 "data_size": 63488 00:22:37.852 }, 00:22:37.852 { 00:22:37.852 "name": "BaseBdev3", 00:22:37.852 "uuid": "1da96704-9dff-5041-89cd-3a49781b2299", 00:22:37.852 "is_configured": true, 00:22:37.852 "data_offset": 2048, 00:22:37.852 "data_size": 63488 00:22:37.852 }, 00:22:37.852 { 00:22:37.852 "name": "BaseBdev4", 00:22:37.852 "uuid": "ee68c3ed-f309-5c22-abc7-ffab2f536de6", 00:22:37.852 "is_configured": true, 00:22:37.852 "data_offset": 2048, 00:22:37.852 "data_size": 63488 00:22:37.852 } 00:22:37.852 ] 00:22:37.852 }' 00:22:37.852 06:15:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:37.852 06:15:08 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:37.852 06:15:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:37.852 06:15:08 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:37.852 06:15:08 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:22:37.852 06:15:08 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:22:37.852 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:22:37.852 06:15:08 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:37.852 06:15:08 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:37.852 06:15:08 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:37.852 06:15:08 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:38.110 [2024-06-11 06:15:08.587257] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:38.110 [2024-06-11 06:15:08.669390] 
bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3360 00:22:38.369 06:15:08 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:38.369 06:15:08 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:38.369 06:15:08 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:38.369 06:15:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:38.369 06:15:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:38.369 06:15:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:38.369 06:15:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:38.369 06:15:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.369 06:15:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.628 06:15:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:38.628 "name": "raid_bdev1", 00:22:38.628 "uuid": "e63e64ac-7a36-4817-8b2e-69f1c60e6914", 00:22:38.628 "strip_size_kb": 0, 00:22:38.628 "state": "online", 00:22:38.628 "raid_level": "raid1", 00:22:38.628 "superblock": true, 00:22:38.628 "num_base_bdevs": 4, 00:22:38.628 "num_base_bdevs_discovered": 3, 00:22:38.628 "num_base_bdevs_operational": 3, 00:22:38.628 "process": { 00:22:38.628 "type": "rebuild", 00:22:38.628 "target": "spare", 00:22:38.628 "progress": { 00:22:38.628 "blocks": 38912, 00:22:38.628 "percent": 61 00:22:38.628 } 00:22:38.628 }, 00:22:38.628 "base_bdevs_list": [ 00:22:38.628 { 00:22:38.628 "name": "spare", 00:22:38.628 "uuid": "e355bbc0-31a0-5517-8c09-7ec82fd559b4", 00:22:38.628 "is_configured": true, 00:22:38.628 "data_offset": 2048, 00:22:38.628 "data_size": 63488 00:22:38.628 }, 00:22:38.628 { 00:22:38.628 "name": null, 00:22:38.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.628 "is_configured": false, 00:22:38.628 "data_offset": 2048, 00:22:38.628 "data_size": 63488 00:22:38.628 }, 00:22:38.628 { 00:22:38.628 "name": "BaseBdev3", 00:22:38.628 "uuid": "1da96704-9dff-5041-89cd-3a49781b2299", 00:22:38.628 "is_configured": true, 00:22:38.628 "data_offset": 2048, 00:22:38.628 "data_size": 63488 00:22:38.628 }, 00:22:38.628 { 00:22:38.628 "name": "BaseBdev4", 00:22:38.628 "uuid": "ee68c3ed-f309-5c22-abc7-ffab2f536de6", 00:22:38.628 "is_configured": true, 00:22:38.628 "data_offset": 2048, 00:22:38.628 "data_size": 63488 00:22:38.628 } 00:22:38.628 ] 00:22:38.628 }' 00:22:38.628 06:15:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:38.628 06:15:09 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:38.628 06:15:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:38.628 06:15:09 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:38.628 06:15:09 -- bdev/bdev_raid.sh@657 -- # local timeout=502 00:22:38.628 06:15:09 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:38.628 06:15:09 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:38.628 06:15:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:38.628 06:15:09 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:38.628 06:15:09 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:38.628 06:15:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:38.628 06:15:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.628 06:15:09 -- 
bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.888 06:15:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:38.888 "name": "raid_bdev1", 00:22:38.888 "uuid": "e63e64ac-7a36-4817-8b2e-69f1c60e6914", 00:22:38.888 "strip_size_kb": 0, 00:22:38.888 "state": "online", 00:22:38.888 "raid_level": "raid1", 00:22:38.888 "superblock": true, 00:22:38.888 "num_base_bdevs": 4, 00:22:38.888 "num_base_bdevs_discovered": 3, 00:22:38.888 "num_base_bdevs_operational": 3, 00:22:38.888 "process": { 00:22:38.888 "type": "rebuild", 00:22:38.888 "target": "spare", 00:22:38.888 "progress": { 00:22:38.888 "blocks": 45056, 00:22:38.888 "percent": 70 00:22:38.888 } 00:22:38.888 }, 00:22:38.888 "base_bdevs_list": [ 00:22:38.888 { 00:22:38.888 "name": "spare", 00:22:38.888 "uuid": "e355bbc0-31a0-5517-8c09-7ec82fd559b4", 00:22:38.888 "is_configured": true, 00:22:38.888 "data_offset": 2048, 00:22:38.888 "data_size": 63488 00:22:38.888 }, 00:22:38.888 { 00:22:38.888 "name": null, 00:22:38.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.888 "is_configured": false, 00:22:38.888 "data_offset": 2048, 00:22:38.888 "data_size": 63488 00:22:38.888 }, 00:22:38.888 { 00:22:38.888 "name": "BaseBdev3", 00:22:38.888 "uuid": "1da96704-9dff-5041-89cd-3a49781b2299", 00:22:38.888 "is_configured": true, 00:22:38.888 "data_offset": 2048, 00:22:38.888 "data_size": 63488 00:22:38.888 }, 00:22:38.888 { 00:22:38.888 "name": "BaseBdev4", 00:22:38.888 "uuid": "ee68c3ed-f309-5c22-abc7-ffab2f536de6", 00:22:38.888 "is_configured": true, 00:22:38.888 "data_offset": 2048, 00:22:38.888 "data_size": 63488 00:22:38.888 } 00:22:38.888 ] 00:22:38.888 }' 00:22:38.888 06:15:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:38.888 06:15:09 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:38.888 06:15:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:38.888 06:15:09 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:38.888 06:15:09 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:39.825 [2024-06-11 06:15:10.180313] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:39.825 [2024-06-11 06:15:10.180405] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:39.825 [2024-06-11 06:15:10.180591] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:39.825 06:15:10 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:39.825 06:15:10 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:39.825 06:15:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:39.825 06:15:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:39.825 06:15:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:39.825 06:15:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:39.825 06:15:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.825 06:15:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.085 06:15:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:40.085 "name": "raid_bdev1", 00:22:40.085 "uuid": "e63e64ac-7a36-4817-8b2e-69f1c60e6914", 00:22:40.085 "strip_size_kb": 0, 00:22:40.085 "state": "online", 00:22:40.085 "raid_level": "raid1", 00:22:40.085 "superblock": true, 00:22:40.085 "num_base_bdevs": 4, 00:22:40.085 "num_base_bdevs_discovered": 3, 
00:22:40.085 "num_base_bdevs_operational": 3, 00:22:40.085 "base_bdevs_list": [ 00:22:40.085 { 00:22:40.085 "name": "spare", 00:22:40.085 "uuid": "e355bbc0-31a0-5517-8c09-7ec82fd559b4", 00:22:40.085 "is_configured": true, 00:22:40.085 "data_offset": 2048, 00:22:40.085 "data_size": 63488 00:22:40.085 }, 00:22:40.085 { 00:22:40.085 "name": null, 00:22:40.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.085 "is_configured": false, 00:22:40.085 "data_offset": 2048, 00:22:40.085 "data_size": 63488 00:22:40.085 }, 00:22:40.085 { 00:22:40.085 "name": "BaseBdev3", 00:22:40.085 "uuid": "1da96704-9dff-5041-89cd-3a49781b2299", 00:22:40.085 "is_configured": true, 00:22:40.085 "data_offset": 2048, 00:22:40.085 "data_size": 63488 00:22:40.085 }, 00:22:40.085 { 00:22:40.085 "name": "BaseBdev4", 00:22:40.085 "uuid": "ee68c3ed-f309-5c22-abc7-ffab2f536de6", 00:22:40.085 "is_configured": true, 00:22:40.085 "data_offset": 2048, 00:22:40.085 "data_size": 63488 00:22:40.085 } 00:22:40.085 ] 00:22:40.085 }' 00:22:40.085 06:15:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:40.085 06:15:10 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:40.085 06:15:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:40.344 06:15:10 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:40.344 06:15:10 -- bdev/bdev_raid.sh@660 -- # break 00:22:40.344 06:15:10 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:40.344 06:15:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:40.344 06:15:10 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:40.344 06:15:10 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:40.344 06:15:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:40.344 06:15:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.344 06:15:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.344 06:15:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:40.344 "name": "raid_bdev1", 00:22:40.344 "uuid": "e63e64ac-7a36-4817-8b2e-69f1c60e6914", 00:22:40.344 "strip_size_kb": 0, 00:22:40.344 "state": "online", 00:22:40.344 "raid_level": "raid1", 00:22:40.344 "superblock": true, 00:22:40.344 "num_base_bdevs": 4, 00:22:40.344 "num_base_bdevs_discovered": 3, 00:22:40.344 "num_base_bdevs_operational": 3, 00:22:40.344 "base_bdevs_list": [ 00:22:40.344 { 00:22:40.344 "name": "spare", 00:22:40.344 "uuid": "e355bbc0-31a0-5517-8c09-7ec82fd559b4", 00:22:40.344 "is_configured": true, 00:22:40.344 "data_offset": 2048, 00:22:40.344 "data_size": 63488 00:22:40.344 }, 00:22:40.344 { 00:22:40.344 "name": null, 00:22:40.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.344 "is_configured": false, 00:22:40.344 "data_offset": 2048, 00:22:40.344 "data_size": 63488 00:22:40.344 }, 00:22:40.344 { 00:22:40.344 "name": "BaseBdev3", 00:22:40.344 "uuid": "1da96704-9dff-5041-89cd-3a49781b2299", 00:22:40.344 "is_configured": true, 00:22:40.344 "data_offset": 2048, 00:22:40.344 "data_size": 63488 00:22:40.344 }, 00:22:40.344 { 00:22:40.344 "name": "BaseBdev4", 00:22:40.344 "uuid": "ee68c3ed-f309-5c22-abc7-ffab2f536de6", 00:22:40.344 "is_configured": true, 00:22:40.344 "data_offset": 2048, 00:22:40.344 "data_size": 63488 00:22:40.344 } 00:22:40.344 ] 00:22:40.345 }' 00:22:40.345 06:15:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:40.604 06:15:11 -- 
bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:40.604 06:15:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:40.604 06:15:11 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:40.604 06:15:11 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:40.604 06:15:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:40.604 06:15:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:40.604 06:15:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:40.604 06:15:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:40.604 06:15:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:40.604 06:15:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:40.604 06:15:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:40.604 06:15:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:40.604 06:15:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:40.604 06:15:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.604 06:15:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.863 06:15:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:40.863 "name": "raid_bdev1", 00:22:40.863 "uuid": "e63e64ac-7a36-4817-8b2e-69f1c60e6914", 00:22:40.863 "strip_size_kb": 0, 00:22:40.863 "state": "online", 00:22:40.863 "raid_level": "raid1", 00:22:40.863 "superblock": true, 00:22:40.863 "num_base_bdevs": 4, 00:22:40.863 "num_base_bdevs_discovered": 3, 00:22:40.863 "num_base_bdevs_operational": 3, 00:22:40.863 "base_bdevs_list": [ 00:22:40.863 { 00:22:40.863 "name": "spare", 00:22:40.863 "uuid": "e355bbc0-31a0-5517-8c09-7ec82fd559b4", 00:22:40.863 "is_configured": true, 00:22:40.863 "data_offset": 2048, 00:22:40.863 "data_size": 63488 00:22:40.863 }, 00:22:40.863 { 00:22:40.863 "name": null, 00:22:40.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.863 "is_configured": false, 00:22:40.863 "data_offset": 2048, 00:22:40.863 "data_size": 63488 00:22:40.863 }, 00:22:40.863 { 00:22:40.863 "name": "BaseBdev3", 00:22:40.863 "uuid": "1da96704-9dff-5041-89cd-3a49781b2299", 00:22:40.863 "is_configured": true, 00:22:40.863 "data_offset": 2048, 00:22:40.863 "data_size": 63488 00:22:40.863 }, 00:22:40.863 { 00:22:40.863 "name": "BaseBdev4", 00:22:40.863 "uuid": "ee68c3ed-f309-5c22-abc7-ffab2f536de6", 00:22:40.863 "is_configured": true, 00:22:40.863 "data_offset": 2048, 00:22:40.863 "data_size": 63488 00:22:40.863 } 00:22:40.863 ] 00:22:40.863 }' 00:22:40.864 06:15:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:40.864 06:15:11 -- common/autotest_common.sh@10 -- # set +x 00:22:41.432 06:15:11 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:41.691 [2024-06-11 06:15:12.138876] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:41.691 [2024-06-11 06:15:12.138916] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:41.691 [2024-06-11 06:15:12.139034] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:41.691 [2024-06-11 06:15:12.139131] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:41.691 [2024-06-11 06:15:12.139140] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x61600000a580 name raid_bdev1, state offline 00:22:41.691 06:15:12 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.691 06:15:12 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:41.950 06:15:12 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:41.950 06:15:12 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:22:41.950 06:15:12 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:41.950 06:15:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:41.950 06:15:12 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:41.950 06:15:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:41.950 06:15:12 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:41.950 06:15:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:41.950 06:15:12 -- bdev/nbd_common.sh@12 -- # local i 00:22:41.950 06:15:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:41.950 06:15:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:41.950 06:15:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:42.210 /dev/nbd0 00:22:42.210 06:15:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:42.210 06:15:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:42.210 06:15:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:42.210 06:15:12 -- common/autotest_common.sh@857 -- # local i 00:22:42.210 06:15:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:42.210 06:15:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:42.210 06:15:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:42.210 06:15:12 -- common/autotest_common.sh@861 -- # break 00:22:42.210 06:15:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:42.210 06:15:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:42.210 06:15:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:42.210 1+0 records in 00:22:42.210 1+0 records out 00:22:42.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193984 s, 21.1 MB/s 00:22:42.210 06:15:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:42.210 06:15:12 -- common/autotest_common.sh@874 -- # size=4096 00:22:42.210 06:15:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:42.210 06:15:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:42.210 06:15:12 -- common/autotest_common.sh@877 -- # return 0 00:22:42.210 06:15:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:42.210 06:15:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:42.210 06:15:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:42.469 /dev/nbd1 00:22:42.469 06:15:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:42.469 06:15:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:42.469 06:15:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:42.469 06:15:12 -- common/autotest_common.sh@857 -- # local i 00:22:42.469 06:15:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:42.469 06:15:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:42.469 06:15:12 -- 
common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:42.469 06:15:12 -- common/autotest_common.sh@861 -- # break 00:22:42.469 06:15:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:42.469 06:15:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:42.469 06:15:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:42.469 1+0 records in 00:22:42.469 1+0 records out 00:22:42.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040165 s, 10.2 MB/s 00:22:42.469 06:15:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:42.469 06:15:12 -- common/autotest_common.sh@874 -- # size=4096 00:22:42.469 06:15:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:42.470 06:15:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:42.470 06:15:12 -- common/autotest_common.sh@877 -- # return 0 00:22:42.470 06:15:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:42.470 06:15:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:42.470 06:15:12 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:42.470 06:15:13 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:42.470 06:15:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:42.470 06:15:13 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:42.470 06:15:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:42.729 06:15:13 -- bdev/nbd_common.sh@51 -- # local i 00:22:42.729 06:15:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:42.729 06:15:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:42.988 06:15:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:42.988 06:15:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:42.988 06:15:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:42.988 06:15:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:42.988 06:15:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:42.988 06:15:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:42.988 06:15:13 -- bdev/nbd_common.sh@41 -- # break 00:22:42.988 06:15:13 -- bdev/nbd_common.sh@45 -- # return 0 00:22:42.988 06:15:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:42.988 06:15:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:43.247 06:15:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:43.247 06:15:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:43.247 06:15:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:43.247 06:15:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:43.247 06:15:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:43.247 06:15:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:43.247 06:15:13 -- bdev/nbd_common.sh@41 -- # break 00:22:43.247 06:15:13 -- bdev/nbd_common.sh@45 -- # return 0 00:22:43.247 06:15:13 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:22:43.247 06:15:13 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:43.247 06:15:13 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:22:43.247 06:15:13 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete BaseBdev1 00:22:43.507 06:15:13 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:43.507 [2024-06-11 06:15:14.120249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:43.507 [2024-06-11 06:15:14.120351] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:43.507 [2024-06-11 06:15:14.120400] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:22:43.507 [2024-06-11 06:15:14.120423] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:43.507 [2024-06-11 06:15:14.123196] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:43.507 [2024-06-11 06:15:14.123263] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:43.507 [2024-06-11 06:15:14.123395] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:43.507 [2024-06-11 06:15:14.123446] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:43.507 BaseBdev1 00:22:43.507 06:15:14 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:43.507 06:15:14 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:22:43.507 06:15:14 -- bdev/bdev_raid.sh@696 -- # continue 00:22:43.507 06:15:14 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:43.507 06:15:14 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:22:43.507 06:15:14 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:22:43.766 06:15:14 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:44.026 [2024-06-11 06:15:14.520325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:44.026 [2024-06-11 06:15:14.520464] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:44.026 [2024-06-11 06:15:14.520516] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:22:44.026 [2024-06-11 06:15:14.520539] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:44.026 [2024-06-11 06:15:14.521069] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:44.026 [2024-06-11 06:15:14.521138] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:44.026 [2024-06-11 06:15:14.521272] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:22:44.026 [2024-06-11 06:15:14.521284] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:22:44.026 [2024-06-11 06:15:14.521291] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:44.026 [2024-06-11 06:15:14.521316] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:22:44.026 [2024-06-11 06:15:14.521415] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:44.026 BaseBdev3 00:22:44.026 06:15:14 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:44.026 06:15:14 -- bdev/bdev_raid.sh@695 -- # '[' -z 
BaseBdev4 ']' 00:22:44.026 06:15:14 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:22:44.285 06:15:14 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:44.285 [2024-06-11 06:15:14.864354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:44.285 [2024-06-11 06:15:14.864441] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:44.285 [2024-06-11 06:15:14.864495] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:22:44.285 [2024-06-11 06:15:14.864523] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:44.285 [2024-06-11 06:15:14.865051] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:44.285 [2024-06-11 06:15:14.865111] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:44.285 [2024-06-11 06:15:14.865211] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:22:44.285 [2024-06-11 06:15:14.865233] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:44.285 BaseBdev4 00:22:44.285 06:15:14 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:44.543 06:15:15 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:44.803 [2024-06-11 06:15:15.208411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:44.803 [2024-06-11 06:15:15.208498] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:44.803 [2024-06-11 06:15:15.208552] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:22:44.803 [2024-06-11 06:15:15.208579] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:44.803 [2024-06-11 06:15:15.209119] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:44.803 [2024-06-11 06:15:15.209176] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:44.803 [2024-06-11 06:15:15.209309] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:22:44.803 [2024-06-11 06:15:15.209344] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:44.803 spare 00:22:44.803 06:15:15 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:44.803 06:15:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:44.803 06:15:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:44.803 06:15:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:44.803 06:15:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:44.803 06:15:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:44.803 06:15:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:44.803 06:15:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:44.803 06:15:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:44.803 06:15:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:44.803 06:15:15 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.803 06:15:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.803 [2024-06-11 06:15:15.309462] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:22:44.803 [2024-06-11 06:15:15.309491] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:44.803 [2024-06-11 06:15:15.309677] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ef0 00:22:44.803 [2024-06-11 06:15:15.310138] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:22:44.803 [2024-06-11 06:15:15.310158] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:22:44.803 [2024-06-11 06:15:15.310316] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:44.803 06:15:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:44.803 "name": "raid_bdev1", 00:22:44.803 "uuid": "e63e64ac-7a36-4817-8b2e-69f1c60e6914", 00:22:44.803 "strip_size_kb": 0, 00:22:44.803 "state": "online", 00:22:44.803 "raid_level": "raid1", 00:22:44.803 "superblock": true, 00:22:44.803 "num_base_bdevs": 4, 00:22:44.803 "num_base_bdevs_discovered": 3, 00:22:44.803 "num_base_bdevs_operational": 3, 00:22:44.803 "base_bdevs_list": [ 00:22:44.803 { 00:22:44.803 "name": "spare", 00:22:44.803 "uuid": "e355bbc0-31a0-5517-8c09-7ec82fd559b4", 00:22:44.803 "is_configured": true, 00:22:44.803 "data_offset": 2048, 00:22:44.803 "data_size": 63488 00:22:44.803 }, 00:22:44.803 { 00:22:44.803 "name": null, 00:22:44.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.803 "is_configured": false, 00:22:44.803 "data_offset": 2048, 00:22:44.803 "data_size": 63488 00:22:44.803 }, 00:22:44.803 { 00:22:44.803 "name": "BaseBdev3", 00:22:44.803 "uuid": "1da96704-9dff-5041-89cd-3a49781b2299", 00:22:44.803 "is_configured": true, 00:22:44.803 "data_offset": 2048, 00:22:44.803 "data_size": 63488 00:22:44.803 }, 00:22:44.803 { 00:22:44.803 "name": "BaseBdev4", 00:22:44.803 "uuid": "ee68c3ed-f309-5c22-abc7-ffab2f536de6", 00:22:44.803 "is_configured": true, 00:22:44.803 "data_offset": 2048, 00:22:44.803 "data_size": 63488 00:22:44.803 } 00:22:44.803 ] 00:22:44.803 }' 00:22:44.803 06:15:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:44.803 06:15:15 -- common/autotest_common.sh@10 -- # set +x 00:22:45.371 06:15:15 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:45.371 06:15:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:45.371 06:15:15 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:45.371 06:15:15 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:45.371 06:15:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:45.371 06:15:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.371 06:15:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.631 06:15:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:45.631 "name": "raid_bdev1", 00:22:45.631 "uuid": "e63e64ac-7a36-4817-8b2e-69f1c60e6914", 00:22:45.631 "strip_size_kb": 0, 00:22:45.631 "state": "online", 00:22:45.631 "raid_level": "raid1", 00:22:45.631 "superblock": true, 00:22:45.631 "num_base_bdevs": 4, 00:22:45.631 "num_base_bdevs_discovered": 3, 00:22:45.631 
"num_base_bdevs_operational": 3, 00:22:45.631 "base_bdevs_list": [ 00:22:45.631 { 00:22:45.631 "name": "spare", 00:22:45.631 "uuid": "e355bbc0-31a0-5517-8c09-7ec82fd559b4", 00:22:45.631 "is_configured": true, 00:22:45.631 "data_offset": 2048, 00:22:45.631 "data_size": 63488 00:22:45.631 }, 00:22:45.631 { 00:22:45.631 "name": null, 00:22:45.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.631 "is_configured": false, 00:22:45.631 "data_offset": 2048, 00:22:45.631 "data_size": 63488 00:22:45.631 }, 00:22:45.631 { 00:22:45.631 "name": "BaseBdev3", 00:22:45.631 "uuid": "1da96704-9dff-5041-89cd-3a49781b2299", 00:22:45.631 "is_configured": true, 00:22:45.631 "data_offset": 2048, 00:22:45.631 "data_size": 63488 00:22:45.631 }, 00:22:45.631 { 00:22:45.631 "name": "BaseBdev4", 00:22:45.631 "uuid": "ee68c3ed-f309-5c22-abc7-ffab2f536de6", 00:22:45.631 "is_configured": true, 00:22:45.631 "data_offset": 2048, 00:22:45.631 "data_size": 63488 00:22:45.631 } 00:22:45.631 ] 00:22:45.631 }' 00:22:45.631 06:15:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:45.631 06:15:16 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:45.631 06:15:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:45.631 06:15:16 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:45.631 06:15:16 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.631 06:15:16 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:45.890 06:15:16 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:22:45.890 06:15:16 -- bdev/bdev_raid.sh@709 -- # killprocess 125782 00:22:45.890 06:15:16 -- common/autotest_common.sh@926 -- # '[' -z 125782 ']' 00:22:45.890 06:15:16 -- common/autotest_common.sh@930 -- # kill -0 125782 00:22:45.890 06:15:16 -- common/autotest_common.sh@931 -- # uname 00:22:45.890 06:15:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:45.890 06:15:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 125782 00:22:45.890 06:15:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:45.890 06:15:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:45.890 06:15:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125782' 00:22:45.890 killing process with pid 125782 00:22:45.890 06:15:16 -- common/autotest_common.sh@945 -- # kill 125782 00:22:45.890 Received shutdown signal, test time was about 60.000000 seconds 00:22:45.890 00:22:45.890 Latency(us) 00:22:45.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.890 =================================================================================================================== 00:22:45.890 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:45.890 [2024-06-11 06:15:16.447834] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:45.890 [2024-06-11 06:15:16.447940] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:45.890 [2024-06-11 06:15:16.448034] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:45.890 [2024-06-11 06:15:16.448044] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:22:45.890 06:15:16 -- common/autotest_common.sh@950 -- # wait 125782 00:22:46.459 [2024-06-11 06:15:16.969160] 
bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:47.876 ************************************ 00:22:47.876 END TEST raid_rebuild_test_sb 00:22:47.876 ************************************ 00:22:47.876 06:15:18 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:47.876 00:22:47.876 real 0m26.022s 00:22:47.876 user 0m36.441s 00:22:47.876 sys 0m4.912s 00:22:47.876 06:15:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:47.876 06:15:18 -- common/autotest_common.sh@10 -- # set +x 00:22:47.876 06:15:18 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:22:47.876 06:15:18 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:22:47.876 06:15:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:47.876 06:15:18 -- common/autotest_common.sh@10 -- # set +x 00:22:47.876 ************************************ 00:22:47.876 START TEST raid_rebuild_test_io 00:22:47.876 ************************************ 00:22:47.876 06:15:18 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false true 00:22:47.876 06:15:18 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:47.876 06:15:18 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:47.876 06:15:18 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:22:47.876 06:15:18 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:22:47.876 06:15:18 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:47.876 06:15:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:47.876 06:15:18 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:47.876 06:15:18 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:47.876 06:15:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:47.876 06:15:18 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:47.876 06:15:18 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:47.876 06:15:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:47.876 06:15:18 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:47.876 06:15:18 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:47.877 06:15:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:47.877 06:15:18 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:22:47.877 06:15:18 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:47.877 06:15:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:47.877 06:15:18 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:47.877 06:15:18 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:47.877 06:15:18 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:47.877 06:15:18 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:47.877 06:15:18 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:47.877 06:15:18 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:47.877 06:15:18 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:47.877 06:15:18 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:47.877 06:15:18 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:47.877 06:15:18 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:22:47.877 06:15:18 -- bdev/bdev_raid.sh@544 -- # raid_pid=126428 00:22:47.877 06:15:18 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:47.877 06:15:18 -- bdev/bdev_raid.sh@545 -- # waitforlisten 126428 /var/tmp/spdk-raid.sock 00:22:47.877 06:15:18 -- common/autotest_common.sh@819 -- # '[' -z 126428 ']' 
00:22:47.877 06:15:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:47.877 06:15:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:47.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:47.877 06:15:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:47.877 06:15:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:47.877 06:15:18 -- common/autotest_common.sh@10 -- # set +x 00:22:47.877 [2024-06-11 06:15:18.519629] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:22:47.877 [2024-06-11 06:15:18.519801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126428 ] 00:22:47.877 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:47.877 Zero copy mechanism will not be used. 00:22:48.136 [2024-06-11 06:15:18.682094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.395 [2024-06-11 06:15:18.928125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.654 [2024-06-11 06:15:19.154956] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:48.912 06:15:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:48.912 06:15:19 -- common/autotest_common.sh@852 -- # return 0 00:22:48.912 06:15:19 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:48.912 06:15:19 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:48.912 06:15:19 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:49.171 BaseBdev1 00:22:49.171 06:15:19 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:49.171 06:15:19 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:49.171 06:15:19 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:49.431 BaseBdev2 00:22:49.431 06:15:20 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:49.431 06:15:20 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:49.431 06:15:20 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:49.690 BaseBdev3 00:22:49.690 06:15:20 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:49.690 06:15:20 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:49.690 06:15:20 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:49.949 BaseBdev4 00:22:49.949 06:15:20 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:50.208 spare_malloc 00:22:50.208 06:15:20 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:50.467 spare_delay 00:22:50.467 06:15:20 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:50.726 
[2024-06-11 06:15:21.117241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:50.726 [2024-06-11 06:15:21.117362] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:50.726 [2024-06-11 06:15:21.117414] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:22:50.726 [2024-06-11 06:15:21.117475] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:50.726 [2024-06-11 06:15:21.120226] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:50.726 [2024-06-11 06:15:21.120280] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:50.726 spare 00:22:50.726 06:15:21 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:50.726 [2024-06-11 06:15:21.289308] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:50.726 [2024-06-11 06:15:21.291620] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:50.726 [2024-06-11 06:15:21.291669] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:50.726 [2024-06-11 06:15:21.291699] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:50.726 [2024-06-11 06:15:21.291774] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:22:50.726 [2024-06-11 06:15:21.291782] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:50.726 [2024-06-11 06:15:21.291930] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:22:50.726 [2024-06-11 06:15:21.292290] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:22:50.726 [2024-06-11 06:15:21.292300] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:22:50.726 [2024-06-11 06:15:21.292475] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:50.726 06:15:21 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:50.727 06:15:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:50.727 06:15:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:50.727 06:15:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:50.727 06:15:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:50.727 06:15:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:50.727 06:15:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:50.727 06:15:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:50.727 06:15:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:50.727 06:15:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:50.727 06:15:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.727 06:15:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.986 06:15:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:50.986 "name": "raid_bdev1", 00:22:50.986 "uuid": "e1b2940f-6760-4281-8492-715abaf474b4", 00:22:50.986 "strip_size_kb": 0, 00:22:50.986 "state": "online", 00:22:50.986 "raid_level": "raid1", 00:22:50.986 "superblock": 
false, 00:22:50.986 "num_base_bdevs": 4, 00:22:50.986 "num_base_bdevs_discovered": 4, 00:22:50.986 "num_base_bdevs_operational": 4, 00:22:50.986 "base_bdevs_list": [ 00:22:50.986 { 00:22:50.986 "name": "BaseBdev1", 00:22:50.986 "uuid": "074d9ad3-e69b-4059-9be3-4e75175911da", 00:22:50.986 "is_configured": true, 00:22:50.986 "data_offset": 0, 00:22:50.986 "data_size": 65536 00:22:50.986 }, 00:22:50.986 { 00:22:50.986 "name": "BaseBdev2", 00:22:50.986 "uuid": "92e5b347-fca4-4d1f-8dcb-746b7b91e8d1", 00:22:50.986 "is_configured": true, 00:22:50.986 "data_offset": 0, 00:22:50.986 "data_size": 65536 00:22:50.986 }, 00:22:50.986 { 00:22:50.986 "name": "BaseBdev3", 00:22:50.986 "uuid": "08d5effc-94d4-4ade-812e-80faa3bcbb13", 00:22:50.986 "is_configured": true, 00:22:50.986 "data_offset": 0, 00:22:50.986 "data_size": 65536 00:22:50.986 }, 00:22:50.986 { 00:22:50.986 "name": "BaseBdev4", 00:22:50.986 "uuid": "36d186f4-ccf5-416a-afa9-83f236554336", 00:22:50.986 "is_configured": true, 00:22:50.986 "data_offset": 0, 00:22:50.986 "data_size": 65536 00:22:50.986 } 00:22:50.986 ] 00:22:50.986 }' 00:22:50.986 06:15:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:50.986 06:15:21 -- common/autotest_common.sh@10 -- # set +x 00:22:51.554 06:15:22 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:51.554 06:15:22 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:51.813 [2024-06-11 06:15:22.257693] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:51.813 06:15:22 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:22:51.813 06:15:22 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.813 06:15:22 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:52.072 06:15:22 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:22:52.072 06:15:22 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:22:52.072 06:15:22 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:52.072 06:15:22 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:52.072 [2024-06-11 06:15:22.622570] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:22:52.072 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:52.072 Zero copy mechanism will not be used. 00:22:52.072 Running I/O for 60 seconds... 
00:22:52.331 [2024-06-11 06:15:22.750026] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:52.331 [2024-06-11 06:15:22.755623] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:22:52.331 06:15:22 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:52.331 06:15:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:52.331 06:15:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:52.331 06:15:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:52.331 06:15:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:52.331 06:15:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:52.331 06:15:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:52.331 06:15:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:52.331 06:15:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:52.331 06:15:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:52.331 06:15:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.331 06:15:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.589 06:15:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:52.589 "name": "raid_bdev1", 00:22:52.589 "uuid": "e1b2940f-6760-4281-8492-715abaf474b4", 00:22:52.589 "strip_size_kb": 0, 00:22:52.589 "state": "online", 00:22:52.589 "raid_level": "raid1", 00:22:52.589 "superblock": false, 00:22:52.589 "num_base_bdevs": 4, 00:22:52.589 "num_base_bdevs_discovered": 3, 00:22:52.589 "num_base_bdevs_operational": 3, 00:22:52.589 "base_bdevs_list": [ 00:22:52.589 { 00:22:52.589 "name": null, 00:22:52.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.589 "is_configured": false, 00:22:52.589 "data_offset": 0, 00:22:52.589 "data_size": 65536 00:22:52.589 }, 00:22:52.589 { 00:22:52.589 "name": "BaseBdev2", 00:22:52.589 "uuid": "92e5b347-fca4-4d1f-8dcb-746b7b91e8d1", 00:22:52.589 "is_configured": true, 00:22:52.589 "data_offset": 0, 00:22:52.589 "data_size": 65536 00:22:52.589 }, 00:22:52.589 { 00:22:52.589 "name": "BaseBdev3", 00:22:52.589 "uuid": "08d5effc-94d4-4ade-812e-80faa3bcbb13", 00:22:52.589 "is_configured": true, 00:22:52.589 "data_offset": 0, 00:22:52.589 "data_size": 65536 00:22:52.589 }, 00:22:52.589 { 00:22:52.589 "name": "BaseBdev4", 00:22:52.589 "uuid": "36d186f4-ccf5-416a-afa9-83f236554336", 00:22:52.589 "is_configured": true, 00:22:52.589 "data_offset": 0, 00:22:52.589 "data_size": 65536 00:22:52.589 } 00:22:52.589 ] 00:22:52.589 }' 00:22:52.589 06:15:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:52.589 06:15:23 -- common/autotest_common.sh@10 -- # set +x 00:22:53.157 06:15:23 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:53.157 [2024-06-11 06:15:23.735632] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:53.157 [2024-06-11 06:15:23.735711] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:53.157 [2024-06-11 06:15:23.776676] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:22:53.157 [2024-06-11 06:15:23.779058] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:53.157 06:15:23 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:53.416 [2024-06-11 
06:15:23.888354] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:53.416 [2024-06-11 06:15:23.890192] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:53.675 [2024-06-11 06:15:24.122446] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:53.675 [2024-06-11 06:15:24.122832] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:53.934 [2024-06-11 06:15:24.465809] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:54.193 06:15:24 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:54.193 06:15:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:54.193 06:15:24 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:54.193 06:15:24 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:54.193 06:15:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:54.193 06:15:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.193 06:15:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.452 [2024-06-11 06:15:24.958321] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:54.452 06:15:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:54.452 "name": "raid_bdev1", 00:22:54.452 "uuid": "e1b2940f-6760-4281-8492-715abaf474b4", 00:22:54.452 "strip_size_kb": 0, 00:22:54.452 "state": "online", 00:22:54.452 "raid_level": "raid1", 00:22:54.452 "superblock": false, 00:22:54.452 "num_base_bdevs": 4, 00:22:54.452 "num_base_bdevs_discovered": 4, 00:22:54.452 "num_base_bdevs_operational": 4, 00:22:54.452 "process": { 00:22:54.452 "type": "rebuild", 00:22:54.452 "target": "spare", 00:22:54.452 "progress": { 00:22:54.452 "blocks": 16384, 00:22:54.452 "percent": 25 00:22:54.452 } 00:22:54.452 }, 00:22:54.452 "base_bdevs_list": [ 00:22:54.452 { 00:22:54.452 "name": "spare", 00:22:54.452 "uuid": "00eb2766-eb3f-55cf-8951-45143858d452", 00:22:54.452 "is_configured": true, 00:22:54.452 "data_offset": 0, 00:22:54.452 "data_size": 65536 00:22:54.452 }, 00:22:54.452 { 00:22:54.452 "name": "BaseBdev2", 00:22:54.452 "uuid": "92e5b347-fca4-4d1f-8dcb-746b7b91e8d1", 00:22:54.453 "is_configured": true, 00:22:54.453 "data_offset": 0, 00:22:54.453 "data_size": 65536 00:22:54.453 }, 00:22:54.453 { 00:22:54.453 "name": "BaseBdev3", 00:22:54.453 "uuid": "08d5effc-94d4-4ade-812e-80faa3bcbb13", 00:22:54.453 "is_configured": true, 00:22:54.453 "data_offset": 0, 00:22:54.453 "data_size": 65536 00:22:54.453 }, 00:22:54.453 { 00:22:54.453 "name": "BaseBdev4", 00:22:54.453 "uuid": "36d186f4-ccf5-416a-afa9-83f236554336", 00:22:54.453 "is_configured": true, 00:22:54.453 "data_offset": 0, 00:22:54.453 "data_size": 65536 00:22:54.453 } 00:22:54.453 ] 00:22:54.453 }' 00:22:54.453 06:15:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:54.453 06:15:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:54.453 06:15:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:54.712 06:15:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:54.712 06:15:25 -- 
bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:54.712 [2024-06-11 06:15:25.207654] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:54.712 [2024-06-11 06:15:25.351862] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:54.971 [2024-06-11 06:15:25.361444] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:54.971 [2024-06-11 06:15:25.461389] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:54.971 [2024-06-11 06:15:25.479797] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:54.971 [2024-06-11 06:15:25.511098] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005a00 00:22:54.971 06:15:25 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:54.971 06:15:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:54.971 06:15:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:54.971 06:15:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:54.971 06:15:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:54.971 06:15:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:54.971 06:15:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:54.971 06:15:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:54.971 06:15:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:54.971 06:15:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:54.971 06:15:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.971 06:15:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.231 06:15:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:55.231 "name": "raid_bdev1", 00:22:55.231 "uuid": "e1b2940f-6760-4281-8492-715abaf474b4", 00:22:55.231 "strip_size_kb": 0, 00:22:55.231 "state": "online", 00:22:55.231 "raid_level": "raid1", 00:22:55.231 "superblock": false, 00:22:55.231 "num_base_bdevs": 4, 00:22:55.231 "num_base_bdevs_discovered": 3, 00:22:55.231 "num_base_bdevs_operational": 3, 00:22:55.231 "base_bdevs_list": [ 00:22:55.231 { 00:22:55.231 "name": null, 00:22:55.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.231 "is_configured": false, 00:22:55.231 "data_offset": 0, 00:22:55.231 "data_size": 65536 00:22:55.231 }, 00:22:55.231 { 00:22:55.231 "name": "BaseBdev2", 00:22:55.231 "uuid": "92e5b347-fca4-4d1f-8dcb-746b7b91e8d1", 00:22:55.231 "is_configured": true, 00:22:55.231 "data_offset": 0, 00:22:55.231 "data_size": 65536 00:22:55.231 }, 00:22:55.231 { 00:22:55.231 "name": "BaseBdev3", 00:22:55.231 "uuid": "08d5effc-94d4-4ade-812e-80faa3bcbb13", 00:22:55.231 "is_configured": true, 00:22:55.231 "data_offset": 0, 00:22:55.231 "data_size": 65536 00:22:55.231 }, 00:22:55.231 { 00:22:55.231 "name": "BaseBdev4", 00:22:55.231 "uuid": "36d186f4-ccf5-416a-afa9-83f236554336", 00:22:55.231 "is_configured": true, 00:22:55.231 "data_offset": 0, 00:22:55.231 "data_size": 65536 00:22:55.231 } 00:22:55.231 ] 00:22:55.231 }' 00:22:55.231 06:15:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:55.231 06:15:25 -- common/autotest_common.sh@10 -- # set +x 00:22:55.799 06:15:26 -- 
bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:55.799 06:15:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:55.799 06:15:26 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:55.799 06:15:26 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:55.799 06:15:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:56.058 06:15:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.058 06:15:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:56.317 06:15:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:56.317 "name": "raid_bdev1", 00:22:56.317 "uuid": "e1b2940f-6760-4281-8492-715abaf474b4", 00:22:56.317 "strip_size_kb": 0, 00:22:56.317 "state": "online", 00:22:56.317 "raid_level": "raid1", 00:22:56.317 "superblock": false, 00:22:56.317 "num_base_bdevs": 4, 00:22:56.317 "num_base_bdevs_discovered": 3, 00:22:56.317 "num_base_bdevs_operational": 3, 00:22:56.317 "base_bdevs_list": [ 00:22:56.317 { 00:22:56.317 "name": null, 00:22:56.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:56.317 "is_configured": false, 00:22:56.317 "data_offset": 0, 00:22:56.317 "data_size": 65536 00:22:56.317 }, 00:22:56.317 { 00:22:56.317 "name": "BaseBdev2", 00:22:56.317 "uuid": "92e5b347-fca4-4d1f-8dcb-746b7b91e8d1", 00:22:56.317 "is_configured": true, 00:22:56.317 "data_offset": 0, 00:22:56.317 "data_size": 65536 00:22:56.317 }, 00:22:56.317 { 00:22:56.317 "name": "BaseBdev3", 00:22:56.317 "uuid": "08d5effc-94d4-4ade-812e-80faa3bcbb13", 00:22:56.317 "is_configured": true, 00:22:56.317 "data_offset": 0, 00:22:56.317 "data_size": 65536 00:22:56.317 }, 00:22:56.317 { 00:22:56.317 "name": "BaseBdev4", 00:22:56.317 "uuid": "36d186f4-ccf5-416a-afa9-83f236554336", 00:22:56.317 "is_configured": true, 00:22:56.317 "data_offset": 0, 00:22:56.317 "data_size": 65536 00:22:56.317 } 00:22:56.317 ] 00:22:56.317 }' 00:22:56.317 06:15:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:56.317 06:15:26 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:56.317 06:15:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:56.317 06:15:26 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:56.317 06:15:26 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:56.575 [2024-06-11 06:15:27.074058] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:56.575 [2024-06-11 06:15:27.074132] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:56.575 [2024-06-11 06:15:27.132426] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:22:56.575 [2024-06-11 06:15:27.134821] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:56.575 06:15:27 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:56.834 [2024-06-11 06:15:27.245651] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:56.834 [2024-06-11 06:15:27.247165] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:56.834 [2024-06-11 06:15:27.466171] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:56.834 [2024-06-11 
06:15:27.467113] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:57.402 [2024-06-11 06:15:27.807048] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:57.402 [2024-06-11 06:15:27.807694] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:57.403 [2024-06-11 06:15:27.943307] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:57.662 06:15:28 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:57.662 06:15:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:57.662 06:15:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:57.662 06:15:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:57.662 06:15:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:57.662 06:15:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.662 06:15:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.662 [2024-06-11 06:15:28.296141] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:57.922 06:15:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:57.922 "name": "raid_bdev1", 00:22:57.922 "uuid": "e1b2940f-6760-4281-8492-715abaf474b4", 00:22:57.922 "strip_size_kb": 0, 00:22:57.922 "state": "online", 00:22:57.922 "raid_level": "raid1", 00:22:57.922 "superblock": false, 00:22:57.922 "num_base_bdevs": 4, 00:22:57.922 "num_base_bdevs_discovered": 4, 00:22:57.922 "num_base_bdevs_operational": 4, 00:22:57.922 "process": { 00:22:57.922 "type": "rebuild", 00:22:57.922 "target": "spare", 00:22:57.922 "progress": { 00:22:57.922 "blocks": 16384, 00:22:57.922 "percent": 25 00:22:57.922 } 00:22:57.922 }, 00:22:57.922 "base_bdevs_list": [ 00:22:57.922 { 00:22:57.922 "name": "spare", 00:22:57.922 "uuid": "00eb2766-eb3f-55cf-8951-45143858d452", 00:22:57.922 "is_configured": true, 00:22:57.922 "data_offset": 0, 00:22:57.922 "data_size": 65536 00:22:57.922 }, 00:22:57.922 { 00:22:57.922 "name": "BaseBdev2", 00:22:57.922 "uuid": "92e5b347-fca4-4d1f-8dcb-746b7b91e8d1", 00:22:57.922 "is_configured": true, 00:22:57.922 "data_offset": 0, 00:22:57.922 "data_size": 65536 00:22:57.922 }, 00:22:57.922 { 00:22:57.922 "name": "BaseBdev3", 00:22:57.922 "uuid": "08d5effc-94d4-4ade-812e-80faa3bcbb13", 00:22:57.922 "is_configured": true, 00:22:57.922 "data_offset": 0, 00:22:57.922 "data_size": 65536 00:22:57.922 }, 00:22:57.922 { 00:22:57.922 "name": "BaseBdev4", 00:22:57.922 "uuid": "36d186f4-ccf5-416a-afa9-83f236554336", 00:22:57.922 "is_configured": true, 00:22:57.922 "data_offset": 0, 00:22:57.922 "data_size": 65536 00:22:57.922 } 00:22:57.922 ] 00:22:57.922 }' 00:22:57.922 06:15:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:57.922 06:15:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:57.922 06:15:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:57.922 06:15:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:57.922 06:15:28 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:22:57.922 06:15:28 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:57.922 06:15:28 -- 
bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:57.922 06:15:28 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:57.922 06:15:28 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:58.181 [2024-06-11 06:15:28.645215] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:58.181 [2024-06-11 06:15:28.679016] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:58.181 [2024-06-11 06:15:28.763683] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:58.441 [2024-06-11 06:15:28.853445] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005a00 00:22:58.441 [2024-06-11 06:15:28.853474] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005c70 00:22:58.441 06:15:28 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:58.441 06:15:28 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:58.441 06:15:28 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:58.441 06:15:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:58.441 06:15:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:58.441 06:15:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:58.441 06:15:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:58.441 06:15:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.441 06:15:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.441 [2024-06-11 06:15:29.087022] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:22:58.700 06:15:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:58.700 "name": "raid_bdev1", 00:22:58.700 "uuid": "e1b2940f-6760-4281-8492-715abaf474b4", 00:22:58.700 "strip_size_kb": 0, 00:22:58.700 "state": "online", 00:22:58.700 "raid_level": "raid1", 00:22:58.700 "superblock": false, 00:22:58.700 "num_base_bdevs": 4, 00:22:58.700 "num_base_bdevs_discovered": 3, 00:22:58.700 "num_base_bdevs_operational": 3, 00:22:58.700 "process": { 00:22:58.700 "type": "rebuild", 00:22:58.700 "target": "spare", 00:22:58.700 "progress": { 00:22:58.700 "blocks": 26624, 00:22:58.700 "percent": 40 00:22:58.700 } 00:22:58.700 }, 00:22:58.700 "base_bdevs_list": [ 00:22:58.700 { 00:22:58.700 "name": "spare", 00:22:58.700 "uuid": "00eb2766-eb3f-55cf-8951-45143858d452", 00:22:58.700 "is_configured": true, 00:22:58.700 "data_offset": 0, 00:22:58.700 "data_size": 65536 00:22:58.700 }, 00:22:58.700 { 00:22:58.700 "name": null, 00:22:58.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.700 "is_configured": false, 00:22:58.700 "data_offset": 0, 00:22:58.700 "data_size": 65536 00:22:58.700 }, 00:22:58.700 { 00:22:58.700 "name": "BaseBdev3", 00:22:58.700 "uuid": "08d5effc-94d4-4ade-812e-80faa3bcbb13", 00:22:58.700 "is_configured": true, 00:22:58.700 "data_offset": 0, 00:22:58.700 "data_size": 65536 00:22:58.700 }, 00:22:58.700 { 00:22:58.700 "name": "BaseBdev4", 00:22:58.700 "uuid": "36d186f4-ccf5-416a-afa9-83f236554336", 00:22:58.700 "is_configured": true, 00:22:58.700 "data_offset": 0, 00:22:58.700 "data_size": 65536 00:22:58.700 } 00:22:58.700 ] 00:22:58.700 }' 00:22:58.700 
06:15:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:58.700 06:15:29 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:58.700 06:15:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:58.700 06:15:29 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:58.700 06:15:29 -- bdev/bdev_raid.sh@657 -- # local timeout=522 00:22:58.700 06:15:29 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:58.700 06:15:29 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:58.700 06:15:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:58.700 06:15:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:58.700 06:15:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:58.700 06:15:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:58.700 06:15:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.700 06:15:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:58.701 [2024-06-11 06:15:29.305098] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:22:58.960 06:15:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:58.960 "name": "raid_bdev1", 00:22:58.960 "uuid": "e1b2940f-6760-4281-8492-715abaf474b4", 00:22:58.960 "strip_size_kb": 0, 00:22:58.960 "state": "online", 00:22:58.960 "raid_level": "raid1", 00:22:58.960 "superblock": false, 00:22:58.960 "num_base_bdevs": 4, 00:22:58.960 "num_base_bdevs_discovered": 3, 00:22:58.960 "num_base_bdevs_operational": 3, 00:22:58.960 "process": { 00:22:58.960 "type": "rebuild", 00:22:58.960 "target": "spare", 00:22:58.960 "progress": { 00:22:58.960 "blocks": 30720, 00:22:58.960 "percent": 46 00:22:58.960 } 00:22:58.960 }, 00:22:58.960 "base_bdevs_list": [ 00:22:58.960 { 00:22:58.960 "name": "spare", 00:22:58.960 "uuid": "00eb2766-eb3f-55cf-8951-45143858d452", 00:22:58.960 "is_configured": true, 00:22:58.960 "data_offset": 0, 00:22:58.960 "data_size": 65536 00:22:58.960 }, 00:22:58.960 { 00:22:58.960 "name": null, 00:22:58.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.960 "is_configured": false, 00:22:58.960 "data_offset": 0, 00:22:58.960 "data_size": 65536 00:22:58.960 }, 00:22:58.960 { 00:22:58.960 "name": "BaseBdev3", 00:22:58.960 "uuid": "08d5effc-94d4-4ade-812e-80faa3bcbb13", 00:22:58.960 "is_configured": true, 00:22:58.960 "data_offset": 0, 00:22:58.960 "data_size": 65536 00:22:58.960 }, 00:22:58.960 { 00:22:58.960 "name": "BaseBdev4", 00:22:58.960 "uuid": "36d186f4-ccf5-416a-afa9-83f236554336", 00:22:58.960 "is_configured": true, 00:22:58.960 "data_offset": 0, 00:22:58.960 "data_size": 65536 00:22:58.960 } 00:22:58.960 ] 00:22:58.960 }' 00:22:58.960 06:15:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:58.960 06:15:29 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:58.960 06:15:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:58.960 [2024-06-11 06:15:29.541321] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:58.960 06:15:29 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:58.960 06:15:29 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:59.219 [2024-06-11 06:15:29.764017] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 
offset_begin: 30720 offset_end: 36864 00:23:00.157 06:15:30 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:00.157 06:15:30 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:00.157 06:15:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:00.157 06:15:30 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:00.157 06:15:30 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:00.157 06:15:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:00.157 06:15:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.157 06:15:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.157 [2024-06-11 06:15:30.770660] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:23:00.157 06:15:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:00.157 "name": "raid_bdev1", 00:23:00.157 "uuid": "e1b2940f-6760-4281-8492-715abaf474b4", 00:23:00.157 "strip_size_kb": 0, 00:23:00.157 "state": "online", 00:23:00.157 "raid_level": "raid1", 00:23:00.157 "superblock": false, 00:23:00.157 "num_base_bdevs": 4, 00:23:00.157 "num_base_bdevs_discovered": 3, 00:23:00.157 "num_base_bdevs_operational": 3, 00:23:00.157 "process": { 00:23:00.157 "type": "rebuild", 00:23:00.157 "target": "spare", 00:23:00.157 "progress": { 00:23:00.157 "blocks": 49152, 00:23:00.157 "percent": 75 00:23:00.157 } 00:23:00.157 }, 00:23:00.157 "base_bdevs_list": [ 00:23:00.157 { 00:23:00.157 "name": "spare", 00:23:00.157 "uuid": "00eb2766-eb3f-55cf-8951-45143858d452", 00:23:00.157 "is_configured": true, 00:23:00.157 "data_offset": 0, 00:23:00.157 "data_size": 65536 00:23:00.157 }, 00:23:00.157 { 00:23:00.157 "name": null, 00:23:00.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.157 "is_configured": false, 00:23:00.157 "data_offset": 0, 00:23:00.157 "data_size": 65536 00:23:00.157 }, 00:23:00.157 { 00:23:00.157 "name": "BaseBdev3", 00:23:00.157 "uuid": "08d5effc-94d4-4ade-812e-80faa3bcbb13", 00:23:00.157 "is_configured": true, 00:23:00.157 "data_offset": 0, 00:23:00.157 "data_size": 65536 00:23:00.157 }, 00:23:00.157 { 00:23:00.157 "name": "BaseBdev4", 00:23:00.157 "uuid": "36d186f4-ccf5-416a-afa9-83f236554336", 00:23:00.157 "is_configured": true, 00:23:00.157 "data_offset": 0, 00:23:00.157 "data_size": 65536 00:23:00.157 } 00:23:00.157 ] 00:23:00.157 }' 00:23:00.157 06:15:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:00.417 06:15:30 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:00.417 06:15:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:00.417 [2024-06-11 06:15:30.878546] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:23:00.417 06:15:30 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:00.417 06:15:30 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:01.013 [2024-06-11 06:15:31.543873] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:01.272 [2024-06-11 06:15:31.643860] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:01.272 [2024-06-11 06:15:31.646273] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:01.272 06:15:31 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:01.272 06:15:31 -- 
bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:01.272 06:15:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:01.272 06:15:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:01.272 06:15:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:01.272 06:15:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:01.272 06:15:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.272 06:15:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.531 06:15:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:01.531 "name": "raid_bdev1", 00:23:01.531 "uuid": "e1b2940f-6760-4281-8492-715abaf474b4", 00:23:01.531 "strip_size_kb": 0, 00:23:01.531 "state": "online", 00:23:01.531 "raid_level": "raid1", 00:23:01.531 "superblock": false, 00:23:01.531 "num_base_bdevs": 4, 00:23:01.531 "num_base_bdevs_discovered": 3, 00:23:01.531 "num_base_bdevs_operational": 3, 00:23:01.531 "base_bdevs_list": [ 00:23:01.531 { 00:23:01.531 "name": "spare", 00:23:01.531 "uuid": "00eb2766-eb3f-55cf-8951-45143858d452", 00:23:01.531 "is_configured": true, 00:23:01.531 "data_offset": 0, 00:23:01.531 "data_size": 65536 00:23:01.531 }, 00:23:01.531 { 00:23:01.531 "name": null, 00:23:01.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.531 "is_configured": false, 00:23:01.531 "data_offset": 0, 00:23:01.531 "data_size": 65536 00:23:01.531 }, 00:23:01.531 { 00:23:01.531 "name": "BaseBdev3", 00:23:01.531 "uuid": "08d5effc-94d4-4ade-812e-80faa3bcbb13", 00:23:01.531 "is_configured": true, 00:23:01.531 "data_offset": 0, 00:23:01.531 "data_size": 65536 00:23:01.531 }, 00:23:01.531 { 00:23:01.531 "name": "BaseBdev4", 00:23:01.531 "uuid": "36d186f4-ccf5-416a-afa9-83f236554336", 00:23:01.531 "is_configured": true, 00:23:01.531 "data_offset": 0, 00:23:01.531 "data_size": 65536 00:23:01.531 } 00:23:01.531 ] 00:23:01.531 }' 00:23:01.531 06:15:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:01.531 06:15:32 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:01.531 06:15:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:01.790 06:15:32 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:01.790 06:15:32 -- bdev/bdev_raid.sh@660 -- # break 00:23:01.790 06:15:32 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:01.790 06:15:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:01.790 06:15:32 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:01.790 06:15:32 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:01.790 06:15:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:01.790 06:15:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.790 06:15:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.790 06:15:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:01.790 "name": "raid_bdev1", 00:23:01.790 "uuid": "e1b2940f-6760-4281-8492-715abaf474b4", 00:23:01.790 "strip_size_kb": 0, 00:23:01.790 "state": "online", 00:23:01.790 "raid_level": "raid1", 00:23:01.790 "superblock": false, 00:23:01.790 "num_base_bdevs": 4, 00:23:01.790 "num_base_bdevs_discovered": 3, 00:23:01.790 "num_base_bdevs_operational": 3, 00:23:01.790 "base_bdevs_list": [ 00:23:01.790 { 00:23:01.790 "name": "spare", 00:23:01.790 
"uuid": "00eb2766-eb3f-55cf-8951-45143858d452", 00:23:01.790 "is_configured": true, 00:23:01.790 "data_offset": 0, 00:23:01.790 "data_size": 65536 00:23:01.790 }, 00:23:01.790 { 00:23:01.790 "name": null, 00:23:01.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.790 "is_configured": false, 00:23:01.790 "data_offset": 0, 00:23:01.790 "data_size": 65536 00:23:01.790 }, 00:23:01.790 { 00:23:01.790 "name": "BaseBdev3", 00:23:01.790 "uuid": "08d5effc-94d4-4ade-812e-80faa3bcbb13", 00:23:01.790 "is_configured": true, 00:23:01.790 "data_offset": 0, 00:23:01.790 "data_size": 65536 00:23:01.790 }, 00:23:01.790 { 00:23:01.790 "name": "BaseBdev4", 00:23:01.790 "uuid": "36d186f4-ccf5-416a-afa9-83f236554336", 00:23:01.790 "is_configured": true, 00:23:01.790 "data_offset": 0, 00:23:01.790 "data_size": 65536 00:23:01.790 } 00:23:01.790 ] 00:23:01.790 }' 00:23:02.049 06:15:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:02.050 06:15:32 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:02.050 06:15:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:02.050 06:15:32 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:02.050 06:15:32 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:02.050 06:15:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:02.050 06:15:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:02.050 06:15:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:02.050 06:15:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:02.050 06:15:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:02.050 06:15:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:02.050 06:15:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:02.050 06:15:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:02.050 06:15:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:02.050 06:15:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.050 06:15:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.309 06:15:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:02.309 "name": "raid_bdev1", 00:23:02.309 "uuid": "e1b2940f-6760-4281-8492-715abaf474b4", 00:23:02.309 "strip_size_kb": 0, 00:23:02.309 "state": "online", 00:23:02.309 "raid_level": "raid1", 00:23:02.309 "superblock": false, 00:23:02.309 "num_base_bdevs": 4, 00:23:02.309 "num_base_bdevs_discovered": 3, 00:23:02.309 "num_base_bdevs_operational": 3, 00:23:02.309 "base_bdevs_list": [ 00:23:02.309 { 00:23:02.309 "name": "spare", 00:23:02.309 "uuid": "00eb2766-eb3f-55cf-8951-45143858d452", 00:23:02.309 "is_configured": true, 00:23:02.309 "data_offset": 0, 00:23:02.309 "data_size": 65536 00:23:02.309 }, 00:23:02.309 { 00:23:02.309 "name": null, 00:23:02.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.309 "is_configured": false, 00:23:02.309 "data_offset": 0, 00:23:02.309 "data_size": 65536 00:23:02.309 }, 00:23:02.309 { 00:23:02.309 "name": "BaseBdev3", 00:23:02.309 "uuid": "08d5effc-94d4-4ade-812e-80faa3bcbb13", 00:23:02.309 "is_configured": true, 00:23:02.309 "data_offset": 0, 00:23:02.309 "data_size": 65536 00:23:02.309 }, 00:23:02.309 { 00:23:02.309 "name": "BaseBdev4", 00:23:02.309 "uuid": "36d186f4-ccf5-416a-afa9-83f236554336", 00:23:02.309 "is_configured": true, 00:23:02.309 "data_offset": 0, 00:23:02.309 "data_size": 65536 
00:23:02.309 } 00:23:02.309 ] 00:23:02.309 }' 00:23:02.309 06:15:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:02.309 06:15:32 -- common/autotest_common.sh@10 -- # set +x 00:23:02.877 06:15:33 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:03.136 [2024-06-11 06:15:33.613945] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:03.136 [2024-06-11 06:15:33.613998] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:03.136 00:23:03.136 Latency(us) 00:23:03.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.136 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:23:03.136 raid_bdev1 : 11.08 105.37 316.11 0.00 0.00 13405.47 306.22 118838.61 00:23:03.136 =================================================================================================================== 00:23:03.136 Total : 105.37 316.11 0.00 0.00 13405.47 306.22 118838.61 00:23:03.136 [2024-06-11 06:15:33.731173] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:03.136 [2024-06-11 06:15:33.731226] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:03.136 [2024-06-11 06:15:33.731317] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:03.137 [2024-06-11 06:15:33.731326] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:23:03.137 0 00:23:03.137 06:15:33 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.137 06:15:33 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:03.396 06:15:33 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:03.396 06:15:33 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:23:03.396 06:15:33 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:23:03.396 06:15:33 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:03.396 06:15:33 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:23:03.396 06:15:33 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:03.396 06:15:33 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:03.396 06:15:33 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:03.396 06:15:33 -- bdev/nbd_common.sh@12 -- # local i 00:23:03.396 06:15:33 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:03.396 06:15:33 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:03.396 06:15:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:23:03.655 /dev/nbd0 00:23:03.655 06:15:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:03.655 06:15:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:03.655 06:15:34 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:03.655 06:15:34 -- common/autotest_common.sh@857 -- # local i 00:23:03.655 06:15:34 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:03.655 06:15:34 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:03.655 06:15:34 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:03.655 06:15:34 -- common/autotest_common.sh@861 -- # break 00:23:03.655 06:15:34 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:03.655 06:15:34 -- common/autotest_common.sh@872 
-- # (( i <= 20 )) 00:23:03.655 06:15:34 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:03.655 1+0 records in 00:23:03.655 1+0 records out 00:23:03.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021084 s, 19.4 MB/s 00:23:03.655 06:15:34 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:03.655 06:15:34 -- common/autotest_common.sh@874 -- # size=4096 00:23:03.655 06:15:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:03.655 06:15:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:03.655 06:15:34 -- common/autotest_common.sh@877 -- # return 0 00:23:03.655 06:15:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:03.655 06:15:34 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:03.655 06:15:34 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:03.655 06:15:34 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:23:03.655 06:15:34 -- bdev/bdev_raid.sh@678 -- # continue 00:23:03.655 06:15:34 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:03.655 06:15:34 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:23:03.655 06:15:34 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:23:03.655 06:15:34 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:03.655 06:15:34 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:23:03.655 06:15:34 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:03.655 06:15:34 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:23:03.655 06:15:34 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:03.655 06:15:34 -- bdev/nbd_common.sh@12 -- # local i 00:23:03.655 06:15:34 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:03.655 06:15:34 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:03.655 06:15:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:23:03.915 /dev/nbd1 00:23:03.915 06:15:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:03.915 06:15:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:03.915 06:15:34 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:03.915 06:15:34 -- common/autotest_common.sh@857 -- # local i 00:23:03.915 06:15:34 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:03.915 06:15:34 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:03.915 06:15:34 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:03.915 06:15:34 -- common/autotest_common.sh@861 -- # break 00:23:03.915 06:15:34 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:03.915 06:15:34 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:03.915 06:15:34 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:03.915 1+0 records in 00:23:03.915 1+0 records out 00:23:03.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276159 s, 14.8 MB/s 00:23:03.915 06:15:34 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:03.915 06:15:34 -- common/autotest_common.sh@874 -- # size=4096 00:23:03.915 06:15:34 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:03.915 06:15:34 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:03.915 06:15:34 -- 
common/autotest_common.sh@877 -- # return 0 00:23:03.915 06:15:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:03.915 06:15:34 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:03.915 06:15:34 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:04.174 06:15:34 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:04.174 06:15:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:04.174 06:15:34 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:23:04.174 06:15:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:04.174 06:15:34 -- bdev/nbd_common.sh@51 -- # local i 00:23:04.174 06:15:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:04.174 06:15:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:04.434 06:15:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:04.434 06:15:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:04.434 06:15:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:04.434 06:15:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:04.434 06:15:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:04.434 06:15:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:04.434 06:15:34 -- bdev/nbd_common.sh@41 -- # break 00:23:04.434 06:15:34 -- bdev/nbd_common.sh@45 -- # return 0 00:23:04.434 06:15:34 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:04.434 06:15:34 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:23:04.434 06:15:34 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:23:04.434 06:15:34 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:04.434 06:15:34 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:23:04.434 06:15:34 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:04.434 06:15:34 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:23:04.434 06:15:34 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:04.434 06:15:34 -- bdev/nbd_common.sh@12 -- # local i 00:23:04.434 06:15:34 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:04.434 06:15:34 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:04.434 06:15:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:23:04.694 /dev/nbd1 00:23:04.694 06:15:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:04.694 06:15:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:04.694 06:15:35 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:04.694 06:15:35 -- common/autotest_common.sh@857 -- # local i 00:23:04.694 06:15:35 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:04.694 06:15:35 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:04.694 06:15:35 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:04.694 06:15:35 -- common/autotest_common.sh@861 -- # break 00:23:04.694 06:15:35 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:04.694 06:15:35 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:04.694 06:15:35 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:04.694 1+0 records in 00:23:04.694 1+0 records out 00:23:04.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297295 s, 13.8 MB/s 00:23:04.694 06:15:35 -- common/autotest_common.sh@874 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:04.694 06:15:35 -- common/autotest_common.sh@874 -- # size=4096 00:23:04.694 06:15:35 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:04.694 06:15:35 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:04.694 06:15:35 -- common/autotest_common.sh@877 -- # return 0 00:23:04.694 06:15:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:04.694 06:15:35 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:04.694 06:15:35 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:04.694 06:15:35 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:04.694 06:15:35 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:04.694 06:15:35 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:23:04.694 06:15:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:04.694 06:15:35 -- bdev/nbd_common.sh@51 -- # local i 00:23:04.694 06:15:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:04.694 06:15:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:04.954 06:15:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:04.954 06:15:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:04.954 06:15:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:04.954 06:15:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:04.954 06:15:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:04.954 06:15:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:04.954 06:15:35 -- bdev/nbd_common.sh@41 -- # break 00:23:04.954 06:15:35 -- bdev/nbd_common.sh@45 -- # return 0 00:23:04.954 06:15:35 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:04.954 06:15:35 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:04.954 06:15:35 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:04.954 06:15:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:04.954 06:15:35 -- bdev/nbd_common.sh@51 -- # local i 00:23:04.954 06:15:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:04.954 06:15:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:05.214 06:15:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:05.214 06:15:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:05.214 06:15:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:05.214 06:15:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:05.214 06:15:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:05.214 06:15:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:05.214 06:15:35 -- bdev/nbd_common.sh@41 -- # break 00:23:05.214 06:15:35 -- bdev/nbd_common.sh@45 -- # return 0 00:23:05.214 06:15:35 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:23:05.214 06:15:35 -- bdev/bdev_raid.sh@709 -- # killprocess 126428 00:23:05.214 06:15:35 -- common/autotest_common.sh@926 -- # '[' -z 126428 ']' 00:23:05.214 06:15:35 -- common/autotest_common.sh@930 -- # kill -0 126428 00:23:05.214 06:15:35 -- common/autotest_common.sh@931 -- # uname 00:23:05.214 06:15:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:05.214 06:15:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126428 00:23:05.214 06:15:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 
00:23:05.214 killing process with pid 126428 00:23:05.214 06:15:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:05.214 06:15:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126428' 00:23:05.214 06:15:35 -- common/autotest_common.sh@945 -- # kill 126428 00:23:05.214 Received shutdown signal, test time was about 13.165270 seconds 00:23:05.214 00:23:05.214 Latency(us) 00:23:05.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.214 =================================================================================================================== 00:23:05.214 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:05.214 06:15:35 -- common/autotest_common.sh@950 -- # wait 126428 00:23:05.214 [2024-06-11 06:15:35.790454] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:05.783 [2024-06-11 06:15:36.234796] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:07.162 00:23:07.162 real 0m19.260s 00:23:07.162 user 0m28.464s 00:23:07.162 sys 0m3.018s 00:23:07.162 06:15:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:07.162 06:15:37 -- common/autotest_common.sh@10 -- # set +x 00:23:07.162 ************************************ 00:23:07.162 END TEST raid_rebuild_test_io 00:23:07.162 ************************************ 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:23:07.162 06:15:37 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:23:07.162 06:15:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:07.162 06:15:37 -- common/autotest_common.sh@10 -- # set +x 00:23:07.162 ************************************ 00:23:07.162 START TEST raid_rebuild_test_sb_io 00:23:07.162 ************************************ 00:23:07.162 06:15:37 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true true 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:07.162 06:15:37 -- 
bdev/bdev_raid.sh@523 -- # local strip_size 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@544 -- # raid_pid=126939 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@545 -- # waitforlisten 126939 /var/tmp/spdk-raid.sock 00:23:07.162 06:15:37 -- common/autotest_common.sh@819 -- # '[' -z 126939 ']' 00:23:07.162 06:15:37 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:07.162 06:15:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:07.162 06:15:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:07.162 06:15:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:07.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:07.162 06:15:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:07.162 06:15:37 -- common/autotest_common.sh@10 -- # set +x 00:23:07.421 [2024-06-11 06:15:37.872278] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:07.421 [2024-06-11 06:15:37.873069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126939 ] 00:23:07.421 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:07.421 Zero copy mechanism will not be used. 
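For orientation: everything from here on runs against a single long-lived bdevperf app. A condensed sketch of the fixture assembled above and below, with the binary path, flags, and RPC commands copied verbatim from the xtrace (the backgrounding with & and the $spdk/$sock shorthands are illustrative; the real script goes through its run_test and helper functions):
spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/spdk-raid.sock
# bdevperf is the RPC target: -z makes it wait for a perform_tests signal,
# -T raid_bdev1 names the job target, -t 60 -w randrw -M 50 -o 3M -q 2 the workload
"$spdk/build/examples/bdevperf" -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 \
    -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!
waitforlisten "$raid_pid" "$sock"   # autotest_common.sh helper traced above
# each base device is a malloc bdev wrapped in a passthru bdev, so it can be
# detached and re-attached by name later (BaseBdev2..4 and 'spare' follow suit):
"$spdk/scripts/rpc.py" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1_malloc
"$spdk/scripts/rpc.py" -s "$sock" bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1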
00:23:07.421 [2024-06-11 06:15:38.054901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.681 [2024-06-11 06:15:38.285934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.940 [2024-06-11 06:15:38.512133] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:08.199 06:15:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:08.199 06:15:38 -- common/autotest_common.sh@852 -- # return 0 00:23:08.199 06:15:38 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:08.199 06:15:38 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:08.199 06:15:38 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:08.458 BaseBdev1_malloc 00:23:08.458 06:15:38 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:08.718 [2024-06-11 06:15:39.150871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:08.718 [2024-06-11 06:15:39.150978] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:08.718 [2024-06-11 06:15:39.151025] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:23:08.718 [2024-06-11 06:15:39.151081] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:08.718 [2024-06-11 06:15:39.153824] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:08.718 [2024-06-11 06:15:39.153873] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:08.718 BaseBdev1 00:23:08.718 06:15:39 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:08.718 06:15:39 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:08.718 06:15:39 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:08.977 BaseBdev2_malloc 00:23:08.977 06:15:39 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:09.236 [2024-06-11 06:15:39.641119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:09.236 [2024-06-11 06:15:39.641242] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:09.236 [2024-06-11 06:15:39.641292] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:23:09.236 [2024-06-11 06:15:39.641356] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:09.236 [2024-06-11 06:15:39.644012] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:09.236 [2024-06-11 06:15:39.644062] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:09.236 BaseBdev2 00:23:09.236 06:15:39 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:09.236 06:15:39 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:09.236 06:15:39 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:09.236 BaseBdev3_malloc 00:23:09.236 06:15:39 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:23:09.495 [2024-06-11 06:15:40.093864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:09.495 [2024-06-11 06:15:40.093989] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:09.495 [2024-06-11 06:15:40.094038] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:23:09.495 [2024-06-11 06:15:40.094085] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:09.495 [2024-06-11 06:15:40.096753] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:09.495 [2024-06-11 06:15:40.096821] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:09.495 BaseBdev3 00:23:09.495 06:15:40 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:09.495 06:15:40 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:09.495 06:15:40 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:09.754 BaseBdev4_malloc 00:23:09.754 06:15:40 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:10.013 [2024-06-11 06:15:40.486779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:10.013 [2024-06-11 06:15:40.486911] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:10.013 [2024-06-11 06:15:40.486957] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:23:10.013 [2024-06-11 06:15:40.487007] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:10.013 [2024-06-11 06:15:40.489700] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:10.013 [2024-06-11 06:15:40.489775] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:10.013 BaseBdev4 00:23:10.013 06:15:40 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:10.273 spare_malloc 00:23:10.273 06:15:40 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:10.273 spare_delay 00:23:10.273 06:15:40 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:10.532 [2024-06-11 06:15:41.063375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:10.532 [2024-06-11 06:15:41.063507] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:10.532 [2024-06-11 06:15:41.063547] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:10.532 [2024-06-11 06:15:41.063604] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:10.532 [2024-06-11 06:15:41.066325] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:10.532 [2024-06-11 06:15:41.066405] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:10.532 spare 00:23:10.532 06:15:41 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:23:10.791 [2024-06-11 06:15:41.235521] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:10.791 [2024-06-11 06:15:41.237828] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:10.791 [2024-06-11 06:15:41.237915] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:10.791 [2024-06-11 06:15:41.237962] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:10.791 [2024-06-11 06:15:41.238182] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:23:10.791 [2024-06-11 06:15:41.238192] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:10.792 [2024-06-11 06:15:41.238346] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:23:10.792 [2024-06-11 06:15:41.238738] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:23:10.792 [2024-06-11 06:15:41.238757] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:23:10.792 [2024-06-11 06:15:41.238922] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:10.792 06:15:41 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:23:10.792 06:15:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:10.792 06:15:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:10.792 06:15:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:10.792 06:15:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:10.792 06:15:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:10.792 06:15:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:10.792 06:15:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:10.792 06:15:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:10.792 06:15:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:10.792 06:15:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.792 06:15:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.051 06:15:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:11.051 "name": "raid_bdev1", 00:23:11.051 "uuid": "c4ee2e9d-b141-4bf9-b5cf-5c5305d95fb7", 00:23:11.051 "strip_size_kb": 0, 00:23:11.051 "state": "online", 00:23:11.051 "raid_level": "raid1", 00:23:11.051 "superblock": true, 00:23:11.051 "num_base_bdevs": 4, 00:23:11.051 "num_base_bdevs_discovered": 4, 00:23:11.051 "num_base_bdevs_operational": 4, 00:23:11.051 "base_bdevs_list": [ 00:23:11.051 { 00:23:11.051 "name": "BaseBdev1", 00:23:11.051 "uuid": "d587cdc5-5b99-5b6d-8507-7f05cfa84615", 00:23:11.051 "is_configured": true, 00:23:11.051 "data_offset": 2048, 00:23:11.051 "data_size": 63488 00:23:11.051 }, 00:23:11.051 { 00:23:11.051 "name": "BaseBdev2", 00:23:11.051 "uuid": "2cce1add-e170-5293-85b8-7e6e4ee61054", 00:23:11.051 "is_configured": true, 00:23:11.051 "data_offset": 2048, 00:23:11.051 "data_size": 63488 00:23:11.051 }, 00:23:11.051 { 00:23:11.051 "name": "BaseBdev3", 00:23:11.051 "uuid": "f976d305-335c-5339-8e2a-27e7487975e6", 00:23:11.051 "is_configured": true, 00:23:11.051 "data_offset": 2048, 00:23:11.051 "data_size": 63488 00:23:11.051 }, 00:23:11.051 
{ 00:23:11.051 "name": "BaseBdev4", 00:23:11.051 "uuid": "74600e6a-b65c-5d31-96f0-256ea8f8f2b0", 00:23:11.051 "is_configured": true, 00:23:11.051 "data_offset": 2048, 00:23:11.051 "data_size": 63488 00:23:11.051 } 00:23:11.051 ] 00:23:11.051 }' 00:23:11.051 06:15:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:11.051 06:15:41 -- common/autotest_common.sh@10 -- # set +x 00:23:11.632 06:15:42 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:11.632 06:15:42 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:11.632 [2024-06-11 06:15:42.251830] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:11.632 06:15:42 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:23:11.900 06:15:42 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.900 06:15:42 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:11.900 06:15:42 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:23:11.900 06:15:42 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:23:11.900 06:15:42 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:11.900 06:15:42 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:12.159 [2024-06-11 06:15:42.640658] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:23:12.159 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:12.159 Zero copy mechanism will not be used. 00:23:12.159 Running I/O for 60 seconds... 
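The size and offset read back above (data_size 63488 out of 65536 malloc blocks, data_offset 2048) suggest the superblock variant reserves the first 2048 blocks of each base bdev for metadata. The step just traced is the crux of the _io tests, condensed below (both commands are verbatim from the trace; the & is illustrative, since the script runs the two concurrently, which is why their xtrace lines interleave out of order):
# start 60 s of randrw traffic against raid_bdev1 inside the waiting bdevperf app...
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests &
# ...and yank a base bdev out from under it; raid1 must keep serving I/O degraded
"$spdk/scripts/rpc.py" -s "$sock" bdev_raid_remove_base_bdev BaseBdev1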
00:23:12.159 [2024-06-11 06:15:42.689535] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:12.159 [2024-06-11 06:15:42.695236] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:23:12.159 06:15:42 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:12.159 06:15:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:12.159 06:15:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:12.159 06:15:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:12.159 06:15:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:12.159 06:15:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:12.159 06:15:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:12.159 06:15:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:12.159 06:15:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:12.159 06:15:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:12.159 06:15:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.159 06:15:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:12.417 06:15:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:12.417 "name": "raid_bdev1", 00:23:12.417 "uuid": "c4ee2e9d-b141-4bf9-b5cf-5c5305d95fb7", 00:23:12.417 "strip_size_kb": 0, 00:23:12.417 "state": "online", 00:23:12.417 "raid_level": "raid1", 00:23:12.417 "superblock": true, 00:23:12.417 "num_base_bdevs": 4, 00:23:12.417 "num_base_bdevs_discovered": 3, 00:23:12.417 "num_base_bdevs_operational": 3, 00:23:12.417 "base_bdevs_list": [ 00:23:12.417 { 00:23:12.417 "name": null, 00:23:12.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:12.417 "is_configured": false, 00:23:12.417 "data_offset": 2048, 00:23:12.417 "data_size": 63488 00:23:12.417 }, 00:23:12.417 { 00:23:12.417 "name": "BaseBdev2", 00:23:12.417 "uuid": "2cce1add-e170-5293-85b8-7e6e4ee61054", 00:23:12.417 "is_configured": true, 00:23:12.417 "data_offset": 2048, 00:23:12.417 "data_size": 63488 00:23:12.417 }, 00:23:12.417 { 00:23:12.417 "name": "BaseBdev3", 00:23:12.417 "uuid": "f976d305-335c-5339-8e2a-27e7487975e6", 00:23:12.417 "is_configured": true, 00:23:12.417 "data_offset": 2048, 00:23:12.417 "data_size": 63488 00:23:12.417 }, 00:23:12.417 { 00:23:12.417 "name": "BaseBdev4", 00:23:12.417 "uuid": "74600e6a-b65c-5d31-96f0-256ea8f8f2b0", 00:23:12.417 "is_configured": true, 00:23:12.417 "data_offset": 2048, 00:23:12.417 "data_size": 63488 00:23:12.417 } 00:23:12.417 ] 00:23:12.417 }' 00:23:12.417 06:15:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:12.417 06:15:42 -- common/autotest_common.sh@10 -- # set +x 00:23:13.011 06:15:43 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:13.271 [2024-06-11 06:15:43.706857] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:13.271 [2024-06-11 06:15:43.706934] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:13.271 [2024-06-11 06:15:43.741619] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:13.271 [2024-06-11 06:15:43.744038] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:13.271 06:15:43 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:13.271 
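With BaseBdev1 gone and traffic still flowing, the empty slot is repaired by hot-adding the 'spare' passthru bdev, which starts the rebuild whose split/process_offset debug lines fill the next stretch of the log. Condensed (the command is verbatim from the trace above; the comment summarizes the polling loop already sketched earlier):
"$spdk/scripts/rpc.py" -s "$sock" bdev_raid_add_base_bdev raid_bdev1 spare
# from here the test polls bdev_raid_get_bdevs once per second and watches
# .process.progress.blocks / .process.progress.percent climb while I/O continues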
[2024-06-11 06:15:43.859611] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:13.271 [2024-06-11 06:15:43.860273] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:13.531 [2024-06-11 06:15:44.081006] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:13.531 [2024-06-11 06:15:44.081383] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:13.790 [2024-06-11 06:15:44.426501] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:23:14.050 [2024-06-11 06:15:44.551684] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:14.050 [2024-06-11 06:15:44.552050] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:14.309 06:15:44 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:14.309 06:15:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:14.309 06:15:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:14.309 06:15:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:14.309 06:15:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:14.309 06:15:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.309 06:15:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:14.309 [2024-06-11 06:15:44.929939] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:14.309 [2024-06-11 06:15:44.931684] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:14.568 06:15:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:14.568 "name": "raid_bdev1", 00:23:14.568 "uuid": "c4ee2e9d-b141-4bf9-b5cf-5c5305d95fb7", 00:23:14.568 "strip_size_kb": 0, 00:23:14.568 "state": "online", 00:23:14.568 "raid_level": "raid1", 00:23:14.568 "superblock": true, 00:23:14.568 "num_base_bdevs": 4, 00:23:14.568 "num_base_bdevs_discovered": 4, 00:23:14.568 "num_base_bdevs_operational": 4, 00:23:14.568 "process": { 00:23:14.568 "type": "rebuild", 00:23:14.568 "target": "spare", 00:23:14.568 "progress": { 00:23:14.568 "blocks": 14336, 00:23:14.568 "percent": 22 00:23:14.568 } 00:23:14.568 }, 00:23:14.568 "base_bdevs_list": [ 00:23:14.568 { 00:23:14.568 "name": "spare", 00:23:14.568 "uuid": "87172f5c-e9b9-5a51-9fcd-96424eedbde5", 00:23:14.568 "is_configured": true, 00:23:14.568 "data_offset": 2048, 00:23:14.568 "data_size": 63488 00:23:14.568 }, 00:23:14.568 { 00:23:14.568 "name": "BaseBdev2", 00:23:14.568 "uuid": "2cce1add-e170-5293-85b8-7e6e4ee61054", 00:23:14.568 "is_configured": true, 00:23:14.568 "data_offset": 2048, 00:23:14.568 "data_size": 63488 00:23:14.568 }, 00:23:14.568 { 00:23:14.568 "name": "BaseBdev3", 00:23:14.568 "uuid": "f976d305-335c-5339-8e2a-27e7487975e6", 00:23:14.568 "is_configured": true, 00:23:14.568 "data_offset": 2048, 00:23:14.568 "data_size": 63488 00:23:14.569 }, 00:23:14.569 { 00:23:14.569 "name": "BaseBdev4", 00:23:14.569 "uuid": "74600e6a-b65c-5d31-96f0-256ea8f8f2b0", 00:23:14.569 
"is_configured": true, 00:23:14.569 "data_offset": 2048, 00:23:14.569 "data_size": 63488 00:23:14.569 } 00:23:14.569 ] 00:23:14.569 }' 00:23:14.569 06:15:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:14.569 06:15:45 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:14.569 06:15:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:14.569 06:15:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:14.569 06:15:45 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:14.828 [2024-06-11 06:15:45.344698] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:14.828 [2024-06-11 06:15:45.364302] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:14.828 [2024-06-11 06:15:45.473040] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:15.087 [2024-06-11 06:15:45.485014] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:15.087 [2024-06-11 06:15:45.517067] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:23:15.087 06:15:45 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:15.087 06:15:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:15.087 06:15:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:15.087 06:15:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:15.087 06:15:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:15.087 06:15:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:15.087 06:15:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:15.087 06:15:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:15.087 06:15:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:15.087 06:15:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:15.087 06:15:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.087 06:15:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.346 06:15:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:15.346 "name": "raid_bdev1", 00:23:15.346 "uuid": "c4ee2e9d-b141-4bf9-b5cf-5c5305d95fb7", 00:23:15.346 "strip_size_kb": 0, 00:23:15.346 "state": "online", 00:23:15.346 "raid_level": "raid1", 00:23:15.346 "superblock": true, 00:23:15.346 "num_base_bdevs": 4, 00:23:15.346 "num_base_bdevs_discovered": 3, 00:23:15.346 "num_base_bdevs_operational": 3, 00:23:15.346 "base_bdevs_list": [ 00:23:15.346 { 00:23:15.346 "name": null, 00:23:15.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.346 "is_configured": false, 00:23:15.346 "data_offset": 2048, 00:23:15.346 "data_size": 63488 00:23:15.346 }, 00:23:15.346 { 00:23:15.346 "name": "BaseBdev2", 00:23:15.346 "uuid": "2cce1add-e170-5293-85b8-7e6e4ee61054", 00:23:15.346 "is_configured": true, 00:23:15.346 "data_offset": 2048, 00:23:15.346 "data_size": 63488 00:23:15.346 }, 00:23:15.346 { 00:23:15.346 "name": "BaseBdev3", 00:23:15.346 "uuid": "f976d305-335c-5339-8e2a-27e7487975e6", 00:23:15.346 "is_configured": true, 00:23:15.346 "data_offset": 2048, 00:23:15.346 "data_size": 63488 00:23:15.346 }, 00:23:15.346 { 00:23:15.346 "name": "BaseBdev4", 00:23:15.346 "uuid": 
"74600e6a-b65c-5d31-96f0-256ea8f8f2b0", 00:23:15.346 "is_configured": true, 00:23:15.346 "data_offset": 2048, 00:23:15.346 "data_size": 63488 00:23:15.346 } 00:23:15.346 ] 00:23:15.346 }' 00:23:15.346 06:15:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:15.346 06:15:45 -- common/autotest_common.sh@10 -- # set +x 00:23:15.914 06:15:46 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:15.914 06:15:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:15.914 06:15:46 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:15.914 06:15:46 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:15.914 06:15:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:15.914 06:15:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.914 06:15:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.173 06:15:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:16.173 "name": "raid_bdev1", 00:23:16.174 "uuid": "c4ee2e9d-b141-4bf9-b5cf-5c5305d95fb7", 00:23:16.174 "strip_size_kb": 0, 00:23:16.174 "state": "online", 00:23:16.174 "raid_level": "raid1", 00:23:16.174 "superblock": true, 00:23:16.174 "num_base_bdevs": 4, 00:23:16.174 "num_base_bdevs_discovered": 3, 00:23:16.174 "num_base_bdevs_operational": 3, 00:23:16.174 "base_bdevs_list": [ 00:23:16.174 { 00:23:16.174 "name": null, 00:23:16.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.174 "is_configured": false, 00:23:16.174 "data_offset": 2048, 00:23:16.174 "data_size": 63488 00:23:16.174 }, 00:23:16.174 { 00:23:16.174 "name": "BaseBdev2", 00:23:16.174 "uuid": "2cce1add-e170-5293-85b8-7e6e4ee61054", 00:23:16.174 "is_configured": true, 00:23:16.174 "data_offset": 2048, 00:23:16.174 "data_size": 63488 00:23:16.174 }, 00:23:16.174 { 00:23:16.174 "name": "BaseBdev3", 00:23:16.174 "uuid": "f976d305-335c-5339-8e2a-27e7487975e6", 00:23:16.174 "is_configured": true, 00:23:16.174 "data_offset": 2048, 00:23:16.174 "data_size": 63488 00:23:16.174 }, 00:23:16.174 { 00:23:16.174 "name": "BaseBdev4", 00:23:16.174 "uuid": "74600e6a-b65c-5d31-96f0-256ea8f8f2b0", 00:23:16.174 "is_configured": true, 00:23:16.174 "data_offset": 2048, 00:23:16.174 "data_size": 63488 00:23:16.174 } 00:23:16.174 ] 00:23:16.174 }' 00:23:16.174 06:15:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:16.174 06:15:46 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:16.174 06:15:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:16.174 06:15:46 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:16.174 06:15:46 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:16.433 [2024-06-11 06:15:46.978481] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:16.433 [2024-06-11 06:15:46.978545] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:16.433 06:15:47 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:16.433 [2024-06-11 06:15:47.033152] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:23:16.433 [2024-06-11 06:15:47.035574] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:16.693 [2024-06-11 06:15:47.158893] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 
offset_end: 6144 00:23:16.693 [2024-06-11 06:15:47.160663] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:23:16.952 [2024-06-11 06:15:47.373770] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:16.952 [2024-06-11 06:15:47.374684] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:23:17.520 [2024-06-11 06:15:47.890307] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:17.520 [2024-06-11 06:15:47.891282] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:23:17.520 06:15:48 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:17.520 06:15:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:17.520 06:15:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:17.520 06:15:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:17.520 06:15:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:17.520 06:15:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.520 06:15:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:17.780 [2024-06-11 06:15:48.238495] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:23:17.780 06:15:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:17.780 "name": "raid_bdev1", 00:23:17.780 "uuid": "c4ee2e9d-b141-4bf9-b5cf-5c5305d95fb7", 00:23:17.780 "strip_size_kb": 0, 00:23:17.780 "state": "online", 00:23:17.780 "raid_level": "raid1", 00:23:17.780 "superblock": true, 00:23:17.780 "num_base_bdevs": 4, 00:23:17.780 "num_base_bdevs_discovered": 4, 00:23:17.780 "num_base_bdevs_operational": 4, 00:23:17.780 "process": { 00:23:17.780 "type": "rebuild", 00:23:17.780 "target": "spare", 00:23:17.780 "progress": { 00:23:17.780 "blocks": 14336, 00:23:17.780 "percent": 22 00:23:17.780 } 00:23:17.780 }, 00:23:17.780 "base_bdevs_list": [ 00:23:17.780 { 00:23:17.780 "name": "spare", 00:23:17.780 "uuid": "87172f5c-e9b9-5a51-9fcd-96424eedbde5", 00:23:17.780 "is_configured": true, 00:23:17.780 "data_offset": 2048, 00:23:17.780 "data_size": 63488 00:23:17.780 }, 00:23:17.780 { 00:23:17.780 "name": "BaseBdev2", 00:23:17.780 "uuid": "2cce1add-e170-5293-85b8-7e6e4ee61054", 00:23:17.780 "is_configured": true, 00:23:17.780 "data_offset": 2048, 00:23:17.780 "data_size": 63488 00:23:17.780 }, 00:23:17.780 { 00:23:17.780 "name": "BaseBdev3", 00:23:17.780 "uuid": "f976d305-335c-5339-8e2a-27e7487975e6", 00:23:17.780 "is_configured": true, 00:23:17.780 "data_offset": 2048, 00:23:17.780 "data_size": 63488 00:23:17.780 }, 00:23:17.780 { 00:23:17.780 "name": "BaseBdev4", 00:23:17.780 "uuid": "74600e6a-b65c-5d31-96f0-256ea8f8f2b0", 00:23:17.780 "is_configured": true, 00:23:17.780 "data_offset": 2048, 00:23:17.780 "data_size": 63488 00:23:17.780 } 00:23:17.780 ] 00:23:17.780 }' 00:23:17.780 06:15:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:17.780 06:15:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:17.780 06:15:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:17.780 [2024-06-11 06:15:48.370828] bdev_raid.c: 
723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:23:17.780 06:15:48 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:17.780 06:15:48 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:23:17.780 06:15:48 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:23:17.780 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:23:17.780 06:15:48 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:23:17.780 06:15:48 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:23:17.780 06:15:48 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:23:17.780 06:15:48 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:18.040 [2024-06-11 06:15:48.606221] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:18.040 [2024-06-11 06:15:48.642117] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005d40 00:23:18.040 [2024-06-11 06:15:48.642151] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005fb0 00:23:18.299 06:15:48 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:23:18.299 06:15:48 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:23:18.299 06:15:48 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:18.299 06:15:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:18.299 06:15:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:18.299 06:15:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:18.299 06:15:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:18.299 06:15:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.299 06:15:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.299 [2024-06-11 06:15:48.781502] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:23:18.560 [2024-06-11 06:15:49.000010] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:23:18.560 06:15:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:18.560 "name": "raid_bdev1", 00:23:18.560 "uuid": "c4ee2e9d-b141-4bf9-b5cf-5c5305d95fb7", 00:23:18.560 "strip_size_kb": 0, 00:23:18.560 "state": "online", 00:23:18.560 "raid_level": "raid1", 00:23:18.560 "superblock": true, 00:23:18.560 "num_base_bdevs": 4, 00:23:18.560 "num_base_bdevs_discovered": 3, 00:23:18.560 "num_base_bdevs_operational": 3, 00:23:18.560 "process": { 00:23:18.560 "type": "rebuild", 00:23:18.560 "target": "spare", 00:23:18.560 "progress": { 00:23:18.560 "blocks": 20480, 00:23:18.560 "percent": 32 00:23:18.560 } 00:23:18.560 }, 00:23:18.560 "base_bdevs_list": [ 00:23:18.560 { 00:23:18.560 "name": "spare", 00:23:18.560 "uuid": "87172f5c-e9b9-5a51-9fcd-96424eedbde5", 00:23:18.560 "is_configured": true, 00:23:18.560 "data_offset": 2048, 00:23:18.560 "data_size": 63488 00:23:18.560 }, 00:23:18.560 { 00:23:18.560 "name": null, 00:23:18.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.560 "is_configured": false, 00:23:18.560 "data_offset": 2048, 00:23:18.560 "data_size": 63488 00:23:18.560 }, 00:23:18.560 { 00:23:18.560 "name": "BaseBdev3", 00:23:18.560 "uuid": "f976d305-335c-5339-8e2a-27e7487975e6", 
00:23:18.560 "is_configured": true, 00:23:18.560 "data_offset": 2048, 00:23:18.560 "data_size": 63488 00:23:18.560 }, 00:23:18.560 { 00:23:18.560 "name": "BaseBdev4", 00:23:18.560 "uuid": "74600e6a-b65c-5d31-96f0-256ea8f8f2b0", 00:23:18.560 "is_configured": true, 00:23:18.560 "data_offset": 2048, 00:23:18.560 "data_size": 63488 00:23:18.560 } 00:23:18.560 ] 00:23:18.560 }' 00:23:18.560 06:15:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:18.560 06:15:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:18.560 06:15:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:18.560 06:15:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:18.560 06:15:49 -- bdev/bdev_raid.sh@657 -- # local timeout=542 00:23:18.560 06:15:49 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:18.560 06:15:49 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:18.560 06:15:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:18.560 06:15:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:18.560 06:15:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:18.560 06:15:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:18.560 06:15:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.560 06:15:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.819 [2024-06-11 06:15:49.230416] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:23:18.819 [2024-06-11 06:15:49.231094] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:23:18.819 06:15:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:18.819 "name": "raid_bdev1", 00:23:18.819 "uuid": "c4ee2e9d-b141-4bf9-b5cf-5c5305d95fb7", 00:23:18.819 "strip_size_kb": 0, 00:23:18.819 "state": "online", 00:23:18.819 "raid_level": "raid1", 00:23:18.819 "superblock": true, 00:23:18.819 "num_base_bdevs": 4, 00:23:18.819 "num_base_bdevs_discovered": 3, 00:23:18.819 "num_base_bdevs_operational": 3, 00:23:18.819 "process": { 00:23:18.819 "type": "rebuild", 00:23:18.819 "target": "spare", 00:23:18.819 "progress": { 00:23:18.820 "blocks": 26624, 00:23:18.820 "percent": 41 00:23:18.820 } 00:23:18.820 }, 00:23:18.820 "base_bdevs_list": [ 00:23:18.820 { 00:23:18.820 "name": "spare", 00:23:18.820 "uuid": "87172f5c-e9b9-5a51-9fcd-96424eedbde5", 00:23:18.820 "is_configured": true, 00:23:18.820 "data_offset": 2048, 00:23:18.820 "data_size": 63488 00:23:18.820 }, 00:23:18.820 { 00:23:18.820 "name": null, 00:23:18.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.820 "is_configured": false, 00:23:18.820 "data_offset": 2048, 00:23:18.820 "data_size": 63488 00:23:18.820 }, 00:23:18.820 { 00:23:18.820 "name": "BaseBdev3", 00:23:18.820 "uuid": "f976d305-335c-5339-8e2a-27e7487975e6", 00:23:18.820 "is_configured": true, 00:23:18.820 "data_offset": 2048, 00:23:18.820 "data_size": 63488 00:23:18.820 }, 00:23:18.820 { 00:23:18.820 "name": "BaseBdev4", 00:23:18.820 "uuid": "74600e6a-b65c-5d31-96f0-256ea8f8f2b0", 00:23:18.820 "is_configured": true, 00:23:18.820 "data_offset": 2048, 00:23:18.820 "data_size": 63488 00:23:18.820 } 00:23:18.820 ] 00:23:18.820 }' 00:23:18.820 [2024-06-11 06:15:49.349231] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
28672 offset_begin: 24576 offset_end: 30720 00:23:18.820 [2024-06-11 06:15:49.349618] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:23:18.820 06:15:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:18.820 06:15:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:18.820 06:15:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:18.820 06:15:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:18.820 06:15:49 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:19.079 [2024-06-11 06:15:49.670300] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:23:19.079 [2024-06-11 06:15:49.671011] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:23:19.337 [2024-06-11 06:15:49.890477] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:23:19.905 06:15:50 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:19.906 06:15:50 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:19.906 06:15:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:19.906 06:15:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:19.906 06:15:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:19.906 06:15:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:19.906 06:15:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.906 06:15:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.165 06:15:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:20.165 "name": "raid_bdev1", 00:23:20.165 "uuid": "c4ee2e9d-b141-4bf9-b5cf-5c5305d95fb7", 00:23:20.165 "strip_size_kb": 0, 00:23:20.165 "state": "online", 00:23:20.165 "raid_level": "raid1", 00:23:20.165 "superblock": true, 00:23:20.165 "num_base_bdevs": 4, 00:23:20.165 "num_base_bdevs_discovered": 3, 00:23:20.165 "num_base_bdevs_operational": 3, 00:23:20.165 "process": { 00:23:20.165 "type": "rebuild", 00:23:20.165 "target": "spare", 00:23:20.165 "progress": { 00:23:20.165 "blocks": 45056, 00:23:20.165 "percent": 70 00:23:20.165 } 00:23:20.165 }, 00:23:20.165 "base_bdevs_list": [ 00:23:20.165 { 00:23:20.165 "name": "spare", 00:23:20.165 "uuid": "87172f5c-e9b9-5a51-9fcd-96424eedbde5", 00:23:20.165 "is_configured": true, 00:23:20.165 "data_offset": 2048, 00:23:20.165 "data_size": 63488 00:23:20.165 }, 00:23:20.165 { 00:23:20.165 "name": null, 00:23:20.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.165 "is_configured": false, 00:23:20.165 "data_offset": 2048, 00:23:20.165 "data_size": 63488 00:23:20.165 }, 00:23:20.165 { 00:23:20.165 "name": "BaseBdev3", 00:23:20.165 "uuid": "f976d305-335c-5339-8e2a-27e7487975e6", 00:23:20.165 "is_configured": true, 00:23:20.165 "data_offset": 2048, 00:23:20.165 "data_size": 63488 00:23:20.165 }, 00:23:20.165 { 00:23:20.165 "name": "BaseBdev4", 00:23:20.165 "uuid": "74600e6a-b65c-5d31-96f0-256ea8f8f2b0", 00:23:20.165 "is_configured": true, 00:23:20.165 "data_offset": 2048, 00:23:20.165 "data_size": 63488 00:23:20.165 } 00:23:20.165 ] 00:23:20.165 }' 00:23:20.165 06:15:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:20.165 06:15:50 -- bdev/bdev_raid.sh@190 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:23:20.165 06:15:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:20.165 06:15:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:20.165 06:15:50 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:20.424 [2024-06-11 06:15:51.000291] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:23:20.990 [2024-06-11 06:15:51.331208] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:23:20.990 [2024-06-11 06:15:51.540692] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:23:21.248 [2024-06-11 06:15:51.780277] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:21.248 06:15:51 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:21.248 06:15:51 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:21.248 06:15:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:21.248 06:15:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:21.248 06:15:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:21.248 06:15:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:21.248 06:15:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.248 06:15:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.248 [2024-06-11 06:15:51.880360] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:21.248 [2024-06-11 06:15:51.883940] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:21.506 06:15:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:21.506 "name": "raid_bdev1", 00:23:21.506 "uuid": "c4ee2e9d-b141-4bf9-b5cf-5c5305d95fb7", 00:23:21.506 "strip_size_kb": 0, 00:23:21.506 "state": "online", 00:23:21.506 "raid_level": "raid1", 00:23:21.506 "superblock": true, 00:23:21.506 "num_base_bdevs": 4, 00:23:21.506 "num_base_bdevs_discovered": 3, 00:23:21.506 "num_base_bdevs_operational": 3, 00:23:21.506 "base_bdevs_list": [ 00:23:21.506 { 00:23:21.506 "name": "spare", 00:23:21.506 "uuid": "87172f5c-e9b9-5a51-9fcd-96424eedbde5", 00:23:21.506 "is_configured": true, 00:23:21.506 "data_offset": 2048, 00:23:21.506 "data_size": 63488 00:23:21.506 }, 00:23:21.506 { 00:23:21.506 "name": null, 00:23:21.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.506 "is_configured": false, 00:23:21.506 "data_offset": 2048, 00:23:21.506 "data_size": 63488 00:23:21.506 }, 00:23:21.506 { 00:23:21.506 "name": "BaseBdev3", 00:23:21.506 "uuid": "f976d305-335c-5339-8e2a-27e7487975e6", 00:23:21.506 "is_configured": true, 00:23:21.506 "data_offset": 2048, 00:23:21.506 "data_size": 63488 00:23:21.506 }, 00:23:21.506 { 00:23:21.506 "name": "BaseBdev4", 00:23:21.506 "uuid": "74600e6a-b65c-5d31-96f0-256ea8f8f2b0", 00:23:21.506 "is_configured": true, 00:23:21.506 "data_offset": 2048, 00:23:21.506 "data_size": 63488 00:23:21.506 } 00:23:21.506 ] 00:23:21.506 }' 00:23:21.506 06:15:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:21.506 06:15:52 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:21.506 06:15:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:21.506 06:15:52 -- bdev/bdev_raid.sh@191 -- # [[ 
none == \s\p\a\r\e ]] 00:23:21.506 06:15:52 -- bdev/bdev_raid.sh@660 -- # break 00:23:21.506 06:15:52 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:21.506 06:15:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:21.506 06:15:52 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:21.506 06:15:52 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:21.506 06:15:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:21.506 06:15:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.506 06:15:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.764 06:15:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:21.764 "name": "raid_bdev1", 00:23:21.764 "uuid": "c4ee2e9d-b141-4bf9-b5cf-5c5305d95fb7", 00:23:21.764 "strip_size_kb": 0, 00:23:21.764 "state": "online", 00:23:21.764 "raid_level": "raid1", 00:23:21.764 "superblock": true, 00:23:21.764 "num_base_bdevs": 4, 00:23:21.764 "num_base_bdevs_discovered": 3, 00:23:21.764 "num_base_bdevs_operational": 3, 00:23:21.764 "base_bdevs_list": [ 00:23:21.764 { 00:23:21.764 "name": "spare", 00:23:21.764 "uuid": "87172f5c-e9b9-5a51-9fcd-96424eedbde5", 00:23:21.764 "is_configured": true, 00:23:21.764 "data_offset": 2048, 00:23:21.764 "data_size": 63488 00:23:21.764 }, 00:23:21.764 { 00:23:21.764 "name": null, 00:23:21.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.764 "is_configured": false, 00:23:21.764 "data_offset": 2048, 00:23:21.764 "data_size": 63488 00:23:21.764 }, 00:23:21.764 { 00:23:21.764 "name": "BaseBdev3", 00:23:21.764 "uuid": "f976d305-335c-5339-8e2a-27e7487975e6", 00:23:21.764 "is_configured": true, 00:23:21.764 "data_offset": 2048, 00:23:21.764 "data_size": 63488 00:23:21.764 }, 00:23:21.764 { 00:23:21.764 "name": "BaseBdev4", 00:23:21.764 "uuid": "74600e6a-b65c-5d31-96f0-256ea8f8f2b0", 00:23:21.764 "is_configured": true, 00:23:21.764 "data_offset": 2048, 00:23:21.764 "data_size": 63488 00:23:21.764 } 00:23:21.764 ] 00:23:21.764 }' 00:23:21.764 06:15:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:22.021 06:15:52 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:22.021 06:15:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:22.021 06:15:52 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:22.021 06:15:52 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:22.021 06:15:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:22.021 06:15:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:22.021 06:15:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:22.021 06:15:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:22.021 06:15:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:22.021 06:15:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:22.021 06:15:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:22.021 06:15:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:22.021 06:15:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:22.021 06:15:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.021 06:15:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.280 06:15:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:22.280 
"name": "raid_bdev1", 00:23:22.280 "uuid": "c4ee2e9d-b141-4bf9-b5cf-5c5305d95fb7", 00:23:22.280 "strip_size_kb": 0, 00:23:22.280 "state": "online", 00:23:22.280 "raid_level": "raid1", 00:23:22.280 "superblock": true, 00:23:22.280 "num_base_bdevs": 4, 00:23:22.280 "num_base_bdevs_discovered": 3, 00:23:22.280 "num_base_bdevs_operational": 3, 00:23:22.280 "base_bdevs_list": [ 00:23:22.280 { 00:23:22.280 "name": "spare", 00:23:22.280 "uuid": "87172f5c-e9b9-5a51-9fcd-96424eedbde5", 00:23:22.280 "is_configured": true, 00:23:22.280 "data_offset": 2048, 00:23:22.280 "data_size": 63488 00:23:22.280 }, 00:23:22.280 { 00:23:22.280 "name": null, 00:23:22.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.280 "is_configured": false, 00:23:22.280 "data_offset": 2048, 00:23:22.280 "data_size": 63488 00:23:22.280 }, 00:23:22.280 { 00:23:22.280 "name": "BaseBdev3", 00:23:22.280 "uuid": "f976d305-335c-5339-8e2a-27e7487975e6", 00:23:22.280 "is_configured": true, 00:23:22.280 "data_offset": 2048, 00:23:22.280 "data_size": 63488 00:23:22.280 }, 00:23:22.280 { 00:23:22.280 "name": "BaseBdev4", 00:23:22.280 "uuid": "74600e6a-b65c-5d31-96f0-256ea8f8f2b0", 00:23:22.280 "is_configured": true, 00:23:22.280 "data_offset": 2048, 00:23:22.280 "data_size": 63488 00:23:22.280 } 00:23:22.280 ] 00:23:22.280 }' 00:23:22.280 06:15:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:22.281 06:15:52 -- common/autotest_common.sh@10 -- # set +x 00:23:22.848 06:15:53 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:22.848 [2024-06-11 06:15:53.407503] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:22.848 [2024-06-11 06:15:53.407547] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:23.107 00:23:23.107 Latency(us) 00:23:23.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:23.107 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:23:23.107 raid_bdev1 : 10.86 95.45 286.35 0.00 0.00 14622.10 347.18 113346.07 00:23:23.107 =================================================================================================================== 00:23:23.107 Total : 95.45 286.35 0.00 0.00 14622.10 347.18 113346.07 00:23:23.107 [2024-06-11 06:15:53.529545] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:23.107 [2024-06-11 06:15:53.529612] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:23.107 [2024-06-11 06:15:53.529724] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:23.107 [2024-06-11 06:15:53.529734] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:23:23.107 0 00:23:23.107 06:15:53 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.107 06:15:53 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:23.107 06:15:53 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:23.107 06:15:53 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:23:23.107 06:15:53 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:23:23.107 06:15:53 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:23.107 06:15:53 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:23:23.107 06:15:53 -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:23:23.107 06:15:53 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:23.107 06:15:53 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:23.107 06:15:53 -- bdev/nbd_common.sh@12 -- # local i 00:23:23.107 06:15:53 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:23.107 06:15:53 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:23.107 06:15:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:23:23.367 /dev/nbd0 00:23:23.626 06:15:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:23.626 06:15:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:23.626 06:15:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:23.626 06:15:54 -- common/autotest_common.sh@857 -- # local i 00:23:23.626 06:15:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:23.626 06:15:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:23.626 06:15:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:23.626 06:15:54 -- common/autotest_common.sh@861 -- # break 00:23:23.626 06:15:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:23.626 06:15:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:23.626 06:15:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:23.626 1+0 records in 00:23:23.626 1+0 records out 00:23:23.626 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374567 s, 10.9 MB/s 00:23:23.626 06:15:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:23.626 06:15:54 -- common/autotest_common.sh@874 -- # size=4096 00:23:23.626 06:15:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:23.626 06:15:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:23.626 06:15:54 -- common/autotest_common.sh@877 -- # return 0 00:23:23.626 06:15:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:23.626 06:15:54 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:23.626 06:15:54 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:23.626 06:15:54 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:23:23.626 06:15:54 -- bdev/bdev_raid.sh@678 -- # continue 00:23:23.626 06:15:54 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:23.626 06:15:54 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:23:23.626 06:15:54 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:23:23.626 06:15:54 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:23.626 06:15:54 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:23:23.626 06:15:54 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:23.626 06:15:54 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:23:23.626 06:15:54 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:23.626 06:15:54 -- bdev/nbd_common.sh@12 -- # local i 00:23:23.626 06:15:54 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:23.626 06:15:54 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:23.626 06:15:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:23:23.906 /dev/nbd1 00:23:23.906 06:15:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:23.906 06:15:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:23.906 06:15:54 -- 
common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:23.906 06:15:54 -- common/autotest_common.sh@857 -- # local i 00:23:23.906 06:15:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:23.906 06:15:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:23.906 06:15:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:23.906 06:15:54 -- common/autotest_common.sh@861 -- # break 00:23:23.906 06:15:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:23.906 06:15:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:23.906 06:15:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:23.906 1+0 records in 00:23:23.906 1+0 records out 00:23:23.906 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529334 s, 7.7 MB/s 00:23:23.906 06:15:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:23.906 06:15:54 -- common/autotest_common.sh@874 -- # size=4096 00:23:23.906 06:15:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:23.906 06:15:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:23.906 06:15:54 -- common/autotest_common.sh@877 -- # return 0 00:23:23.906 06:15:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:23.906 06:15:54 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:23.906 06:15:54 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:23.906 06:15:54 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:23.906 06:15:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:23.906 06:15:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:23:23.906 06:15:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:23.906 06:15:54 -- bdev/nbd_common.sh@51 -- # local i 00:23:23.906 06:15:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:23.906 06:15:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:24.174 06:15:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:24.174 06:15:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:24.174 06:15:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:24.174 06:15:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:24.174 06:15:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:24.174 06:15:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:24.174 06:15:54 -- bdev/nbd_common.sh@41 -- # break 00:23:24.174 06:15:54 -- bdev/nbd_common.sh@45 -- # return 0 00:23:24.174 06:15:54 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:24.174 06:15:54 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:23:24.174 06:15:54 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:23:24.174 06:15:54 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:24.174 06:15:54 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:23:24.174 06:15:54 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:24.174 06:15:54 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:23:24.174 06:15:54 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:24.174 06:15:54 -- bdev/nbd_common.sh@12 -- # local i 00:23:24.174 06:15:54 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:24.174 06:15:54 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:24.174 06:15:54 -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:23:24.449 /dev/nbd1 00:23:24.449 06:15:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:24.449 06:15:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:24.449 06:15:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:24.449 06:15:55 -- common/autotest_common.sh@857 -- # local i 00:23:24.449 06:15:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:24.449 06:15:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:24.449 06:15:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:24.449 06:15:55 -- common/autotest_common.sh@861 -- # break 00:23:24.449 06:15:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:24.449 06:15:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:24.449 06:15:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:24.449 1+0 records in 00:23:24.449 1+0 records out 00:23:24.449 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378353 s, 10.8 MB/s 00:23:24.449 06:15:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:24.708 06:15:55 -- common/autotest_common.sh@874 -- # size=4096 00:23:24.708 06:15:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:24.708 06:15:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:24.708 06:15:55 -- common/autotest_common.sh@877 -- # return 0 00:23:24.708 06:15:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:24.708 06:15:55 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:24.708 06:15:55 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:24.708 06:15:55 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:24.708 06:15:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:24.708 06:15:55 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:23:24.708 06:15:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:24.708 06:15:55 -- bdev/nbd_common.sh@51 -- # local i 00:23:24.708 06:15:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:24.708 06:15:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:24.967 06:15:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:24.967 06:15:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:24.967 06:15:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:24.967 06:15:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:24.967 06:15:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:24.967 06:15:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:24.967 06:15:55 -- bdev/nbd_common.sh@41 -- # break 00:23:24.967 06:15:55 -- bdev/nbd_common.sh@45 -- # return 0 00:23:24.967 06:15:55 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:24.967 06:15:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:24.967 06:15:55 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:24.967 06:15:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:24.967 06:15:55 -- bdev/nbd_common.sh@51 -- # local i 00:23:24.967 06:15:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:24.967 06:15:55 -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:25.226 06:15:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:25.226 06:15:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:25.226 06:15:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:25.226 06:15:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:25.226 06:15:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:25.226 06:15:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:25.226 06:15:55 -- bdev/nbd_common.sh@41 -- # break 00:23:25.226 06:15:55 -- bdev/nbd_common.sh@45 -- # return 0 00:23:25.226 06:15:55 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:23:25.226 06:15:55 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:25.226 06:15:55 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:23:25.226 06:15:55 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:25.485 06:15:55 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:25.485 [2024-06-11 06:15:56.108089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:25.485 [2024-06-11 06:15:56.108215] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:25.485 [2024-06-11 06:15:56.108259] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:23:25.485 [2024-06-11 06:15:56.108282] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:25.485 [2024-06-11 06:15:56.111008] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:25.485 [2024-06-11 06:15:56.111075] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:25.485 [2024-06-11 06:15:56.111193] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:25.485 [2024-06-11 06:15:56.111254] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:25.485 BaseBdev1 00:23:25.485 06:15:56 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:25.485 06:15:56 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:23:25.485 06:15:56 -- bdev/bdev_raid.sh@696 -- # continue 00:23:25.485 06:15:56 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:25.485 06:15:56 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:23:25.485 06:15:56 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:23:25.743 06:15:56 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:26.003 [2024-06-11 06:15:56.456195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:26.003 [2024-06-11 06:15:56.456296] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.003 [2024-06-11 06:15:56.456340] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:23:26.003 [2024-06-11 06:15:56.456362] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.003 [2024-06-11 06:15:56.456879] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.003 
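The bdev_passthru_delete / bdev_passthru_create pairs traced above are how this test simulates base devices dropping out and returning: every base bdev sits behind a passthru vbdev on a malloc backing device, so tearing the passthru down and re-registering it forces bdev_raid's examine path to re-read the on-disk superblock and re-claim the device. A minimal sketch of that pattern, assuming the same rpc.py path and -s socket used throughout this run:

    # Sketch only: recycle a passthru bdev so bdev_raid re-examines it.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    recreate_passthru() {
        local bdev=$1
        $rpc bdev_passthru_delete "$bdev"
        # Re-registering on the same backing malloc bdev triggers examine,
        # which finds the raid superblock and re-claims the device.
        $rpc bdev_passthru_create -b "${bdev}_malloc" -p "$bdev"
    }

    recreate_passthru BaseBdev1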
[2024-06-11 06:15:56.456946] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:26.003 [2024-06-11 06:15:56.457050] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:23:26.003 [2024-06-11 06:15:56.457063] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:23:26.003 [2024-06-11 06:15:56.457070] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:26.003 [2024-06-11 06:15:56.457089] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:23:26.003 [2024-06-11 06:15:56.457168] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:26.003 BaseBdev3 00:23:26.003 06:15:56 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:26.003 06:15:56 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:23:26.003 06:15:56 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:23:26.003 06:15:56 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:26.261 [2024-06-11 06:15:56.788274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:26.261 [2024-06-11 06:15:56.788375] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.261 [2024-06-11 06:15:56.788429] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:23:26.261 [2024-06-11 06:15:56.788465] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.261 [2024-06-11 06:15:56.788983] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.261 [2024-06-11 06:15:56.789039] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:26.261 [2024-06-11 06:15:56.789157] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:23:26.261 [2024-06-11 06:15:56.789186] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:26.261 BaseBdev4 00:23:26.262 06:15:56 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:26.521 06:15:56 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:26.780 [2024-06-11 06:15:57.212435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:26.780 [2024-06-11 06:15:57.212546] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.780 [2024-06-11 06:15:57.212583] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:23:26.780 [2024-06-11 06:15:57.212611] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.780 [2024-06-11 06:15:57.213186] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.780 [2024-06-11 06:15:57.213252] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:26.780 [2024-06-11 06:15:57.213380] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 
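The seq_number comparison logged for BaseBdev3, (4) greater than existing raid bdev raid_bdev1 (1), is what drives reassembly: when an examined superblock carries a newer sequence number than the partially assembled raid bdev, the stale raid bdev is deleted and rebuilt around the newer metadata, which is why raid_bdev1 briefly passes through the configuring state here before spare is claimed and it goes online. One way to watch that transition from the shell, reusing the RPC socket and jq filter this test already uses (sketch):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # configuring -> online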
00:23:26.780 [2024-06-11 06:15:57.213417] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:26.780 spare 00:23:26.780 06:15:57 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:26.780 06:15:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:26.780 06:15:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:26.780 06:15:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:26.780 06:15:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:26.780 06:15:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:26.780 06:15:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:26.780 06:15:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:26.780 06:15:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:26.780 06:15:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:26.780 06:15:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.780 06:15:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.780 [2024-06-11 06:15:57.313528] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:23:26.780 [2024-06-11 06:15:57.313552] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:26.780 [2024-06-11 06:15:57.313755] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:23:26.780 [2024-06-11 06:15:57.314150] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:23:26.780 [2024-06-11 06:15:57.314165] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380 00:23:26.780 [2024-06-11 06:15:57.314326] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:27.039 06:15:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:27.039 "name": "raid_bdev1", 00:23:27.039 "uuid": "c4ee2e9d-b141-4bf9-b5cf-5c5305d95fb7", 00:23:27.039 "strip_size_kb": 0, 00:23:27.039 "state": "online", 00:23:27.039 "raid_level": "raid1", 00:23:27.039 "superblock": true, 00:23:27.039 "num_base_bdevs": 4, 00:23:27.039 "num_base_bdevs_discovered": 3, 00:23:27.039 "num_base_bdevs_operational": 3, 00:23:27.039 "base_bdevs_list": [ 00:23:27.039 { 00:23:27.039 "name": "spare", 00:23:27.039 "uuid": "87172f5c-e9b9-5a51-9fcd-96424eedbde5", 00:23:27.039 "is_configured": true, 00:23:27.039 "data_offset": 2048, 00:23:27.039 "data_size": 63488 00:23:27.039 }, 00:23:27.039 { 00:23:27.039 "name": null, 00:23:27.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.039 "is_configured": false, 00:23:27.039 "data_offset": 2048, 00:23:27.039 "data_size": 63488 00:23:27.039 }, 00:23:27.039 { 00:23:27.039 "name": "BaseBdev3", 00:23:27.039 "uuid": "f976d305-335c-5339-8e2a-27e7487975e6", 00:23:27.039 "is_configured": true, 00:23:27.039 "data_offset": 2048, 00:23:27.039 "data_size": 63488 00:23:27.039 }, 00:23:27.039 { 00:23:27.039 "name": "BaseBdev4", 00:23:27.039 "uuid": "74600e6a-b65c-5d31-96f0-256ea8f8f2b0", 00:23:27.039 "is_configured": true, 00:23:27.039 "data_offset": 2048, 00:23:27.039 "data_size": 63488 00:23:27.039 } 00:23:27.039 ] 00:23:27.039 }' 00:23:27.039 06:15:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:27.039 06:15:57 -- common/autotest_common.sh@10 -- # set +x 00:23:27.608 06:15:57 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process 
raid_bdev1 none none 00:23:27.608 06:15:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:27.608 06:15:57 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:27.608 06:15:57 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:27.608 06:15:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:27.608 06:15:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.608 06:15:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.608 06:15:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:27.608 "name": "raid_bdev1", 00:23:27.608 "uuid": "c4ee2e9d-b141-4bf9-b5cf-5c5305d95fb7", 00:23:27.608 "strip_size_kb": 0, 00:23:27.608 "state": "online", 00:23:27.608 "raid_level": "raid1", 00:23:27.608 "superblock": true, 00:23:27.608 "num_base_bdevs": 4, 00:23:27.608 "num_base_bdevs_discovered": 3, 00:23:27.608 "num_base_bdevs_operational": 3, 00:23:27.608 "base_bdevs_list": [ 00:23:27.608 { 00:23:27.608 "name": "spare", 00:23:27.608 "uuid": "87172f5c-e9b9-5a51-9fcd-96424eedbde5", 00:23:27.608 "is_configured": true, 00:23:27.608 "data_offset": 2048, 00:23:27.608 "data_size": 63488 00:23:27.608 }, 00:23:27.608 { 00:23:27.608 "name": null, 00:23:27.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.608 "is_configured": false, 00:23:27.608 "data_offset": 2048, 00:23:27.608 "data_size": 63488 00:23:27.608 }, 00:23:27.608 { 00:23:27.608 "name": "BaseBdev3", 00:23:27.608 "uuid": "f976d305-335c-5339-8e2a-27e7487975e6", 00:23:27.608 "is_configured": true, 00:23:27.608 "data_offset": 2048, 00:23:27.608 "data_size": 63488 00:23:27.608 }, 00:23:27.608 { 00:23:27.608 "name": "BaseBdev4", 00:23:27.608 "uuid": "74600e6a-b65c-5d31-96f0-256ea8f8f2b0", 00:23:27.608 "is_configured": true, 00:23:27.608 "data_offset": 2048, 00:23:27.608 "data_size": 63488 00:23:27.608 } 00:23:27.608 ] 00:23:27.608 }' 00:23:27.608 06:15:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:27.868 06:15:58 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:27.868 06:15:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:27.868 06:15:58 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:27.868 06:15:58 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.868 06:15:58 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:28.127 06:15:58 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:23:28.127 06:15:58 -- bdev/bdev_raid.sh@709 -- # killprocess 126939 00:23:28.127 06:15:58 -- common/autotest_common.sh@926 -- # '[' -z 126939 ']' 00:23:28.127 06:15:58 -- common/autotest_common.sh@930 -- # kill -0 126939 00:23:28.127 06:15:58 -- common/autotest_common.sh@931 -- # uname 00:23:28.127 06:15:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:28.127 06:15:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126939 00:23:28.127 killing process with pid 126939 00:23:28.127 Received shutdown signal, test time was about 15.950766 seconds 00:23:28.127 00:23:28.127 Latency(us) 00:23:28.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.127 =================================================================================================================== 00:23:28.127 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:28.127 06:15:58 -- common/autotest_common.sh@932 -- # 
process_name=reactor_0 00:23:28.127 06:15:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:28.127 06:15:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126939' 00:23:28.127 06:15:58 -- common/autotest_common.sh@945 -- # kill 126939 00:23:28.127 06:15:58 -- common/autotest_common.sh@950 -- # wait 126939 00:23:28.127 [2024-06-11 06:15:58.594077] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:28.127 [2024-06-11 06:15:58.594195] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:28.127 [2024-06-11 06:15:58.594297] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:28.127 [2024-06-11 06:15:58.594311] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:23:28.695 [2024-06-11 06:15:59.033869] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:30.074 ************************************ 00:23:30.074 END TEST raid_rebuild_test_sb_io 00:23:30.074 ************************************ 00:23:30.074 06:16:00 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:30.074 00:23:30.074 real 0m22.695s 00:23:30.074 user 0m34.832s 00:23:30.074 sys 0m3.687s 00:23:30.074 06:16:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:30.074 06:16:00 -- common/autotest_common.sh@10 -- # set +x 00:23:30.074 06:16:00 -- bdev/bdev_raid.sh@742 -- # '[' n == y ']' 00:23:30.074 06:16:00 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:23:30.074 00:23:30.074 real 8m43.207s 00:23:30.074 user 13m38.872s 00:23:30.075 sys 1m32.233s 00:23:30.075 06:16:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:30.075 06:16:00 -- common/autotest_common.sh@10 -- # set +x 00:23:30.075 ************************************ 00:23:30.075 END TEST bdev_raid 00:23:30.075 ************************************ 00:23:30.075 06:16:00 -- spdk/autotest.sh@197 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:23:30.075 06:16:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:30.075 06:16:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:30.075 06:16:00 -- common/autotest_common.sh@10 -- # set +x 00:23:30.075 ************************************ 00:23:30.075 START TEST bdevperf_config 00:23:30.075 ************************************ 00:23:30.075 06:16:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:23:30.075 * Looking for test storage... 
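The teardown sequence traced just above is the stock killprocess helper: probe the pid with kill -0, recover the command name with ps, signal the process, then wait so the shell reaps it; the wait is what lets bdevperf flush the "Received shutdown signal" latency summary before the next suite starts. Roughly, as reconstructed from the xtrace (the helper's special-casing of processes running under sudo is elided):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                    # still alive?
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                   # reap; flushes runtime stats
    }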
00:23:30.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:23:30.075 06:16:00 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:23:30.075 06:16:00 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:23:30.075 06:16:00 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:23:30.075 06:16:00 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:23:30.075 06:16:00 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:30.075 06:16:00 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:23:30.075 06:16:00 -- bdevperf/common.sh@8 -- # local job_section=global 00:23:30.075 06:16:00 -- bdevperf/common.sh@9 -- # local rw=read 00:23:30.075 06:16:00 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:23:30.075 06:16:00 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:23:30.075 06:16:00 -- bdevperf/common.sh@13 -- # cat 00:23:30.075 06:16:00 -- bdevperf/common.sh@18 -- # job='[global]' 00:23:30.075 06:16:00 -- bdevperf/common.sh@19 -- # echo 00:23:30.075 00:23:30.075 06:16:00 -- bdevperf/common.sh@20 -- # cat 00:23:30.075 06:16:00 -- bdevperf/test_config.sh@18 -- # create_job job0 00:23:30.075 06:16:00 -- bdevperf/common.sh@8 -- # local job_section=job0 00:23:30.075 06:16:00 -- bdevperf/common.sh@9 -- # local rw= 00:23:30.075 06:16:00 -- bdevperf/common.sh@10 -- # local filename= 00:23:30.075 06:16:00 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:23:30.075 06:16:00 -- bdevperf/common.sh@18 -- # job='[job0]' 00:23:30.075 06:16:00 -- bdevperf/common.sh@19 -- # echo 00:23:30.075 00:23:30.075 06:16:00 -- bdevperf/common.sh@20 -- # cat 00:23:30.075 06:16:00 -- bdevperf/test_config.sh@19 -- # create_job job1 00:23:30.075 06:16:00 -- bdevperf/common.sh@8 -- # local job_section=job1 00:23:30.075 06:16:00 -- bdevperf/common.sh@9 -- # local rw= 00:23:30.075 06:16:00 -- bdevperf/common.sh@10 -- # local filename= 00:23:30.075 06:16:00 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:23:30.075 06:16:00 -- bdevperf/common.sh@18 -- # job='[job1]' 00:23:30.075 06:16:00 -- bdevperf/common.sh@19 -- # echo 00:23:30.075 00:23:30.075 06:16:00 -- bdevperf/common.sh@20 -- # cat 00:23:30.075 06:16:00 -- bdevperf/test_config.sh@20 -- # create_job job2 00:23:30.075 06:16:00 -- bdevperf/common.sh@8 -- # local job_section=job2 00:23:30.334 06:16:00 -- bdevperf/common.sh@9 -- # local rw= 00:23:30.334 06:16:00 -- bdevperf/common.sh@10 -- # local filename= 00:23:30.334 06:16:00 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:23:30.334 06:16:00 -- bdevperf/common.sh@18 -- # job='[job2]' 00:23:30.334 06:16:00 -- bdevperf/common.sh@19 -- # echo 00:23:30.334 00:23:30.334 06:16:00 -- bdevperf/common.sh@20 -- # cat 00:23:30.334 06:16:00 -- bdevperf/test_config.sh@21 -- # create_job job3 00:23:30.334 06:16:00 -- bdevperf/common.sh@8 -- # local job_section=job3 00:23:30.334 06:16:00 -- bdevperf/common.sh@9 -- # local rw= 00:23:30.334 06:16:00 -- bdevperf/common.sh@10 -- # local filename= 00:23:30.334 06:16:00 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:23:30.334 06:16:00 -- bdevperf/common.sh@18 -- # job='[job3]' 00:23:30.334 06:16:00 -- bdevperf/common.sh@19 -- # echo 00:23:30.334 00:23:30.334 06:16:00 -- bdevperf/common.sh@20 -- # cat 00:23:30.334 06:16:00 -- bdevperf/test_config.sh@22 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:23:35.607 06:16:05 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-06-11 06:16:00.815224] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:35.607 [2024-06-11 06:16:00.815428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127601 ] 00:23:35.607 Using job config with 4 jobs 00:23:35.607 [2024-06-11 06:16:00.998081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.607 [2024-06-11 06:16:01.268208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.607 cpumask for '\''job0'\'' is too big 00:23:35.607 cpumask for '\''job1'\'' is too big 00:23:35.607 cpumask for '\''job2'\'' is too big 00:23:35.607 cpumask for '\''job3'\'' is too big 00:23:35.607 Running I/O for 2 seconds... 00:23:35.607 00:23:35.607 Latency(us) 00:23:35.607 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.607 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:35.607 Malloc0 : 2.01 35331.24 34.50 0.00 0.00 7239.87 1443.35 12108.56 00:23:35.607 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:35.607 Malloc0 : 2.02 35309.71 34.48 0.00 0.00 7231.91 1412.14 10673.01 00:23:35.607 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:35.607 Malloc0 : 2.02 35288.53 34.46 0.00 0.00 7223.87 1373.14 9237.46 00:23:35.607 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:35.607 Malloc0 : 2.02 35267.58 34.44 0.00 0.00 7216.38 1388.74 7895.53 00:23:35.607 =================================================================================================================== 00:23:35.607 Total : 141197.06 137.89 0.00 0.00 7228.01 1373.14 12108.56' 00:23:35.607 06:16:05 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-06-11 06:16:00.815224] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:35.607 [2024-06-11 06:16:00.815428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127601 ] 00:23:35.607 Using job config with 4 jobs 00:23:35.607 [2024-06-11 06:16:00.998081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.607 [2024-06-11 06:16:01.268208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.607 cpumask for '\''job0'\'' is too big 00:23:35.607 cpumask for '\''job1'\'' is too big 00:23:35.607 cpumask for '\''job2'\'' is too big 00:23:35.607 cpumask for '\''job3'\'' is too big 00:23:35.607 Running I/O for 2 seconds... 
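The create_job calls at the top of this suite do nothing more than append INI-style sections to test.conf, which bdevperf consumes through -j alongside the JSON bdev config passed with --json. Based on the arguments visible in the trace (a global read job on Malloc0 plus four job sections whose extra parameters arrive on stdin via the trailing cat and are not captured here), the generated file plausibly looks like this sketch:

    # Reconstruction, not the captured file; per-job stdin parameters elided.
    cat > test.conf <<'EOF'
    [global]
    rw=read
    filename=Malloc0

    [job0]
    [job1]
    [job2]
    [job3]
    EOF
    # Consumed as: bdevperf -t 2 --json conf.json -j test.conf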
00:23:35.607 00:23:35.607 Latency(us) 00:23:35.607 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.607 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:35.607 Malloc0 : 2.01 35331.24 34.50 0.00 0.00 7239.87 1443.35 12108.56 00:23:35.607 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:35.607 Malloc0 : 2.02 35309.71 34.48 0.00 0.00 7231.91 1412.14 10673.01 00:23:35.607 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:35.607 Malloc0 : 2.02 35288.53 34.46 0.00 0.00 7223.87 1373.14 9237.46 00:23:35.607 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:35.607 Malloc0 : 2.02 35267.58 34.44 0.00 0.00 7216.38 1388.74 7895.53 00:23:35.607 =================================================================================================================== 00:23:35.607 Total : 141197.06 137.89 0.00 0.00 7228.01 1373.14 12108.56' 00:23:35.607 06:16:05 -- bdevperf/common.sh@32 -- # echo '[2024-06-11 06:16:00.815224] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:35.607 [2024-06-11 06:16:00.815428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127601 ] 00:23:35.607 Using job config with 4 jobs 00:23:35.607 [2024-06-11 06:16:00.998081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.607 [2024-06-11 06:16:01.268208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.607 cpumask for '\''job0'\'' is too big 00:23:35.607 cpumask for '\''job1'\'' is too big 00:23:35.607 cpumask for '\''job2'\'' is too big 00:23:35.607 cpumask for '\''job3'\'' is too big 00:23:35.607 Running I/O for 2 seconds... 00:23:35.607 00:23:35.607 Latency(us) 00:23:35.607 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.607 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:35.607 Malloc0 : 2.01 35331.24 34.50 0.00 0.00 7239.87 1443.35 12108.56 00:23:35.607 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:35.607 Malloc0 : 2.02 35309.71 34.48 0.00 0.00 7231.91 1412.14 10673.01 00:23:35.607 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:35.607 Malloc0 : 2.02 35288.53 34.46 0.00 0.00 7223.87 1373.14 9237.46 00:23:35.607 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:35.607 Malloc0 : 2.02 35267.58 34.44 0.00 0.00 7216.38 1388.74 7895.53 00:23:35.607 =================================================================================================================== 00:23:35.607 Total : 141197.06 137.89 0.00 0.00 7228.01 1373.14 12108.56' 00:23:35.607 06:16:05 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:23:35.607 06:16:05 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:23:35.607 06:16:05 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:23:35.607 06:16:05 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:23:35.607 [2024-06-11 06:16:05.553774] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
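The create_job trace above assembles a fio-style INI job file at test.conf, which bdevperf then consumes via -j alongside the bdev definitions in conf.json (--json). A plausible reconstruction of the file produced for this first run, inferred from the rw=/filename= locals and the '[global]'/'[jobN]' section headers visible in the trace (any keys beyond these two are not shown in the log):

    [global]
    rw=read
    filename=Malloc0

    [job0]

    [job1]

    [job2]

    [job3]

The empty [jobN] sections fall back to the global rw/filename values, which is consistent with the run reporting four identical Malloc0 read jobs.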
00:23:35.607 [2024-06-11 06:16:05.553924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127659 ] 00:23:35.607 [2024-06-11 06:16:05.715567] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.607 [2024-06-11 06:16:05.954110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.866 cpumask for 'job0' is too big 00:23:35.866 cpumask for 'job1' is too big 00:23:35.866 cpumask for 'job2' is too big 00:23:35.866 cpumask for 'job3' is too big 00:23:40.084 06:16:10 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:23:40.084 Running I/O for 2 seconds... 00:23:40.084 00:23:40.084 Latency(us) 00:23:40.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.084 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:40.084 Malloc0 : 2.01 35477.78 34.65 0.00 0.00 7210.36 1341.93 11109.91 00:23:40.084 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:40.084 Malloc0 : 2.01 35455.76 34.62 0.00 0.00 7203.58 1271.71 9861.61 00:23:40.084 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:40.084 Malloc0 : 2.02 35434.57 34.60 0.00 0.00 7196.36 1295.12 8550.89 00:23:40.084 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:23:40.084 Malloc0 : 2.02 35413.17 34.58 0.00 0.00 7190.45 1302.92 8051.57 00:23:40.084 =================================================================================================================== 00:23:40.084 Total : 141781.27 138.46 0.00 0.00 7200.19 1271.71 11109.91' 00:23:40.084 06:16:10 -- bdevperf/test_config.sh@27 -- # cleanup 00:23:40.084 06:16:10 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:23:40.084 00:23:40.084 06:16:10 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:23:40.084 06:16:10 -- bdevperf/common.sh@8 -- # local job_section=job0 00:23:40.084 06:16:10 -- bdevperf/common.sh@9 -- # local rw=write 00:23:40.084 06:16:10 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:23:40.084 06:16:10 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:23:40.084 06:16:10 -- bdevperf/common.sh@18 -- # job='[job0]' 00:23:40.084 06:16:10 -- bdevperf/common.sh@19 -- # echo 00:23:40.084 06:16:10 -- bdevperf/common.sh@20 -- # cat 00:23:40.084 06:16:10 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:23:40.084 06:16:10 -- bdevperf/common.sh@8 -- # local job_section=job1 00:23:40.084 06:16:10 -- bdevperf/common.sh@9 -- # local rw=write 00:23:40.084 06:16:10 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:23:40.084 06:16:10 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:23:40.085 06:16:10 -- bdevperf/common.sh@18 -- # job='[job1]' 00:23:40.085 06:16:10 -- bdevperf/common.sh@19 -- # echo 00:23:40.085 00:23:40.085 06:16:10 -- bdevperf/common.sh@20 -- # cat 00:23:40.085 06:16:10 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:23:40.085 06:16:10 -- bdevperf/common.sh@8 -- # local job_section=job2 00:23:40.085 06:16:10 -- bdevperf/common.sh@9 -- # local rw=write 00:23:40.085 06:16:10 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:23:40.085 06:16:10 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:23:40.085 06:16:10 -- bdevperf/common.sh@18 -- # 
job='[job2]' 00:23:40.085 00:23:40.085 06:16:10 -- bdevperf/common.sh@19 -- # echo 00:23:40.085 06:16:10 -- bdevperf/common.sh@20 -- # cat 00:23:40.085 06:16:10 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:23:44.278 06:16:14 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-06-11 06:16:10.279948] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:44.279 [2024-06-11 06:16:10.280101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127720 ] 00:23:44.279 Using job config with 3 jobs 00:23:44.279 [2024-06-11 06:16:10.442232] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.279 [2024-06-11 06:16:10.694116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.279 cpumask for '\''job0'\'' is too big 00:23:44.279 cpumask for '\''job1'\'' is too big 00:23:44.279 cpumask for '\''job2'\'' is too big 00:23:44.279 Running I/O for 2 seconds... 00:23:44.279 00:23:44.279 Latency(us) 00:23:44.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.279 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:44.279 Malloc0 : 2.01 47859.37 46.74 0.00 0.00 5343.80 1326.32 8176.40 00:23:44.279 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:44.279 Malloc0 : 2.01 47828.40 46.71 0.00 0.00 5338.81 1310.72 6803.26 00:23:44.279 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:44.279 Malloc0 : 2.01 47799.00 46.68 0.00 0.00 5334.36 1279.51 6241.52 00:23:44.279 =================================================================================================================== 00:23:44.279 Total : 143486.77 140.12 0.00 0.00 5338.99 1279.51 8176.40' 00:23:44.279 06:16:14 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-06-11 06:16:10.279948] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:44.279 [2024-06-11 06:16:10.280101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127720 ] 00:23:44.279 Using job config with 3 jobs 00:23:44.279 [2024-06-11 06:16:10.442232] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.279 [2024-06-11 06:16:10.694116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.279 cpumask for '\''job0'\'' is too big 00:23:44.279 cpumask for '\''job1'\'' is too big 00:23:44.279 cpumask for '\''job2'\'' is too big 00:23:44.279 Running I/O for 2 seconds... 
00:23:44.279 00:23:44.279 Latency(us) 00:23:44.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.279 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:44.279 Malloc0 : 2.01 47859.37 46.74 0.00 0.00 5343.80 1326.32 8176.40 00:23:44.279 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:44.279 Malloc0 : 2.01 47828.40 46.71 0.00 0.00 5338.81 1310.72 6803.26 00:23:44.279 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:44.279 Malloc0 : 2.01 47799.00 46.68 0.00 0.00 5334.36 1279.51 6241.52 00:23:44.279 =================================================================================================================== 00:23:44.279 Total : 143486.77 140.12 0.00 0.00 5338.99 1279.51 8176.40' 00:23:44.279 06:16:14 -- bdevperf/common.sh@32 -- # echo '[2024-06-11 06:16:10.279948] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:44.279 [2024-06-11 06:16:10.280101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127720 ] 00:23:44.279 Using job config with 3 jobs 00:23:44.279 [2024-06-11 06:16:10.442232] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.279 [2024-06-11 06:16:10.694116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.279 cpumask for '\''job0'\'' is too big 00:23:44.279 cpumask for '\''job1'\'' is too big 00:23:44.279 cpumask for '\''job2'\'' is too big 00:23:44.279 Running I/O for 2 seconds... 00:23:44.279 00:23:44.279 Latency(us) 00:23:44.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.279 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:44.279 Malloc0 : 2.01 47859.37 46.74 0.00 0.00 5343.80 1326.32 8176.40 00:23:44.279 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:44.279 Malloc0 : 2.01 47828.40 46.71 0.00 0.00 5338.81 1310.72 6803.26 00:23:44.279 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:23:44.279 Malloc0 : 2.01 47799.00 46.68 0.00 0.00 5334.36 1279.51 6241.52 00:23:44.279 =================================================================================================================== 00:23:44.279 Total : 143486.77 140.12 0.00 0.00 5338.99 1279.51 8176.40' 00:23:44.279 06:16:14 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:23:44.279 06:16:14 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:23:44.279 06:16:14 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:23:44.279 06:16:14 -- bdevperf/test_config.sh@35 -- # cleanup 00:23:44.279 06:16:14 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:23:44.279 06:16:14 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:23:44.279 06:16:14 -- bdevperf/common.sh@8 -- # local job_section=global 00:23:44.279 06:16:14 -- bdevperf/common.sh@9 -- # local rw=rw 00:23:44.279 06:16:14 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:23:44.279 06:16:14 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:23:44.279 06:16:14 -- bdevperf/common.sh@13 -- # cat 00:23:44.279 06:16:14 -- bdevperf/common.sh@18 -- # job='[global]' 00:23:44.279 00:23:44.279 06:16:14 -- bdevperf/common.sh@19 -- # echo 00:23:44.279 
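The job-count check traced above re-echoes the captured run output and reduces it with two chained greps; that is why the full benchmark output, banner and all, appears a second time in the log right before the grep lines. A minimal sketch of the helper as the common.sh@32 trace shows it:

    # Extract N from the 'Using job config with N jobs' banner.
    get_num_jobs() {
        echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
    }

    # Used as in the trace, e.g.: [[ $(get_num_jobs "$bdevperf_output") == 3 ]]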
06:16:14 -- bdevperf/common.sh@20 -- # cat 00:23:44.279 06:16:14 -- bdevperf/test_config.sh@38 -- # create_job job0 00:23:44.279 06:16:14 -- bdevperf/common.sh@8 -- # local job_section=job0 00:23:44.279 06:16:14 -- bdevperf/common.sh@9 -- # local rw= 00:23:44.279 06:16:14 -- bdevperf/common.sh@10 -- # local filename= 00:23:44.279 06:16:14 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:23:44.279 06:16:14 -- bdevperf/common.sh@18 -- # job='[job0]' 00:23:44.279 00:23:44.279 06:16:14 -- bdevperf/common.sh@19 -- # echo 00:23:44.279 06:16:14 -- bdevperf/common.sh@20 -- # cat 00:23:44.279 06:16:14 -- bdevperf/test_config.sh@39 -- # create_job job1 00:23:44.279 06:16:14 -- bdevperf/common.sh@8 -- # local job_section=job1 00:23:44.279 06:16:14 -- bdevperf/common.sh@9 -- # local rw= 00:23:44.279 06:16:14 -- bdevperf/common.sh@10 -- # local filename= 00:23:44.279 06:16:14 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:23:44.279 00:23:44.279 06:16:14 -- bdevperf/common.sh@18 -- # job='[job1]' 00:23:44.279 06:16:14 -- bdevperf/common.sh@19 -- # echo 00:23:44.279 06:16:14 -- bdevperf/common.sh@20 -- # cat 00:23:44.539 06:16:14 -- bdevperf/test_config.sh@40 -- # create_job job2 00:23:44.539 06:16:14 -- bdevperf/common.sh@8 -- # local job_section=job2 00:23:44.539 06:16:14 -- bdevperf/common.sh@9 -- # local rw= 00:23:44.539 06:16:14 -- bdevperf/common.sh@10 -- # local filename= 00:23:44.539 06:16:14 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:23:44.539 06:16:14 -- bdevperf/common.sh@18 -- # job='[job2]' 00:23:44.539 00:23:44.539 06:16:14 -- bdevperf/common.sh@19 -- # echo 00:23:44.539 06:16:14 -- bdevperf/common.sh@20 -- # cat 00:23:44.539 06:16:14 -- bdevperf/test_config.sh@41 -- # create_job job3 00:23:44.539 06:16:14 -- bdevperf/common.sh@8 -- # local job_section=job3 00:23:44.539 06:16:14 -- bdevperf/common.sh@9 -- # local rw= 00:23:44.539 06:16:14 -- bdevperf/common.sh@10 -- # local filename= 00:23:44.539 06:16:14 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:23:44.539 06:16:14 -- bdevperf/common.sh@18 -- # job='[job3]' 00:23:44.539 00:23:44.539 06:16:14 -- bdevperf/common.sh@19 -- # echo 00:23:44.539 06:16:14 -- bdevperf/common.sh@20 -- # cat 00:23:44.539 06:16:14 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:23:49.814 06:16:19 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-06-11 06:16:15.020620] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:49.814 [2024-06-11 06:16:15.020839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127790 ] 00:23:49.814 Using job config with 4 jobs 00:23:49.814 [2024-06-11 06:16:15.204555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.814 [2024-06-11 06:16:15.465359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.814 cpumask for '\''job0'\'' is too big 00:23:49.814 cpumask for '\''job1'\'' is too big 00:23:49.815 cpumask for '\''job2'\'' is too big 00:23:49.815 cpumask for '\''job3'\'' is too big 00:23:49.815 Running I/O for 2 seconds... 
00:23:49.815 00:23:49.815 Latency(us) 00:23:49.815 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.815 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.815 Malloc0 : 2.02 17774.00 17.36 0.00 0.00 14393.20 2793.08 22594.32 00:23:49.815 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.815 Malloc1 : 2.03 17777.48 17.36 0.00 0.00 14381.84 3245.59 22594.32 00:23:49.815 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.815 Malloc0 : 2.03 17766.97 17.35 0.00 0.00 14354.35 2652.65 19848.05 00:23:49.815 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.815 Malloc1 : 2.03 17755.94 17.34 0.00 0.00 14353.59 3136.37 19848.05 00:23:49.815 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.815 Malloc0 : 2.03 17745.40 17.33 0.00 0.00 14327.35 2683.86 17101.78 00:23:49.815 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.815 Malloc1 : 2.04 17734.62 17.32 0.00 0.00 14326.32 3198.78 17101.78 00:23:49.815 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.815 Malloc0 : 2.04 17724.06 17.31 0.00 0.00 14299.83 2699.46 15666.22 00:23:49.815 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.815 Malloc1 : 2.04 17713.27 17.30 0.00 0.00 14297.72 3198.78 15666.22 00:23:49.815 =================================================================================================================== 00:23:49.815 Total : 141991.73 138.66 0.00 0.00 14341.73 2652.65 22594.32' 00:23:49.815 06:16:19 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-06-11 06:16:15.020620] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:49.815 [2024-06-11 06:16:15.020839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127790 ] 00:23:49.815 Using job config with 4 jobs 00:23:49.815 [2024-06-11 06:16:15.204555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.815 [2024-06-11 06:16:15.465359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.815 cpumask for '\''job0'\'' is too big 00:23:49.815 cpumask for '\''job1'\'' is too big 00:23:49.815 cpumask for '\''job2'\'' is too big 00:23:49.815 cpumask for '\''job3'\'' is too big 00:23:49.815 Running I/O for 2 seconds... 
00:23:49.815 00:23:49.815 Latency(us) 00:23:49.815 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.815 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.815 Malloc0 : 2.02 17774.00 17.36 0.00 0.00 14393.20 2793.08 22594.32 00:23:49.815 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.815 Malloc1 : 2.03 17777.48 17.36 0.00 0.00 14381.84 3245.59 22594.32 00:23:49.815 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.815 Malloc0 : 2.03 17766.97 17.35 0.00 0.00 14354.35 2652.65 19848.05 00:23:49.815 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.815 Malloc1 : 2.03 17755.94 17.34 0.00 0.00 14353.59 3136.37 19848.05 00:23:49.815 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.815 Malloc0 : 2.03 17745.40 17.33 0.00 0.00 14327.35 2683.86 17101.78 00:23:49.815 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.815 Malloc1 : 2.04 17734.62 17.32 0.00 0.00 14326.32 3198.78 17101.78 00:23:49.815 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.815 Malloc0 : 2.04 17724.06 17.31 0.00 0.00 14299.83 2699.46 15666.22 00:23:49.815 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.815 Malloc1 : 2.04 17713.27 17.30 0.00 0.00 14297.72 3198.78 15666.22 00:23:49.815 =================================================================================================================== 00:23:49.815 Total : 141991.73 138.66 0.00 0.00 14341.73 2652.65 22594.32' 00:23:49.815 06:16:19 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:23:49.815 06:16:19 -- bdevperf/common.sh@32 -- # echo '[2024-06-11 06:16:15.020620] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:49.815 [2024-06-11 06:16:15.020839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127790 ] 00:23:49.815 Using job config with 4 jobs 00:23:49.815 [2024-06-11 06:16:15.204555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.815 [2024-06-11 06:16:15.465359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.815 cpumask for '\''job0'\'' is too big 00:23:49.815 cpumask for '\''job1'\'' is too big 00:23:49.815 cpumask for '\''job2'\'' is too big 00:23:49.815 cpumask for '\''job3'\'' is too big 00:23:49.815 Running I/O for 2 seconds... 
00:23:49.815 00:23:49.815 Latency(us) 00:23:49.815 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.815 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.815 Malloc0 : 2.02 17774.00 17.36 0.00 0.00 14393.20 2793.08 22594.32 00:23:49.815 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.815 Malloc1 : 2.03 17777.48 17.36 0.00 0.00 14381.84 3245.59 22594.32 00:23:49.815 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.815 Malloc0 : 2.03 17766.97 17.35 0.00 0.00 14354.35 2652.65 19848.05 00:23:49.815 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.815 Malloc1 : 2.03 17755.94 17.34 0.00 0.00 14353.59 3136.37 19848.05 00:23:49.815 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.815 Malloc0 : 2.03 17745.40 17.33 0.00 0.00 14327.35 2683.86 17101.78 00:23:49.815 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.815 Malloc1 : 2.04 17734.62 17.32 0.00 0.00 14326.32 3198.78 17101.78 00:23:49.815 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.815 Malloc0 : 2.04 17724.06 17.31 0.00 0.00 14299.83 2699.46 15666.22 00:23:49.815 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:23:49.815 Malloc1 : 2.04 17713.27 17.30 0.00 0.00 14297.72 3198.78 15666.22 00:23:49.815 =================================================================================================================== 00:23:49.815 Total : 141991.73 138.66 0.00 0.00 14341.73 2652.65 22594.32' 00:23:49.815 06:16:19 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:23:49.815 06:16:19 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:23:49.815 06:16:19 -- bdevperf/test_config.sh@44 -- # cleanup 00:23:49.815 06:16:19 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:23:49.815 06:16:19 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:49.815 00:23:49.815 real 0m19.126s 00:23:49.815 user 0m16.903s 00:23:49.815 sys 0m1.653s 00:23:49.815 06:16:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:49.815 06:16:19 -- common/autotest_common.sh@10 -- # set +x 00:23:49.815 ************************************ 00:23:49.815 END TEST bdevperf_config 00:23:49.815 ************************************ 00:23:49.815 06:16:19 -- spdk/autotest.sh@198 -- # uname -s 00:23:49.815 06:16:19 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:23:49.815 06:16:19 -- spdk/autotest.sh@199 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:23:49.815 06:16:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:49.815 06:16:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:49.815 06:16:19 -- common/autotest_common.sh@10 -- # set +x 00:23:49.815 ************************************ 00:23:49.815 START TEST reactor_set_interrupt 00:23:49.815 ************************************ 00:23:49.815 06:16:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:23:49.815 * Looking for test storage... 
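Between the END TEST and START TEST banners above, autotest.sh gates the interrupt suite on `uname -s` being Linux and then hands reactor_set_interrupt.sh to run_test. Judging from the banners, the '[' 2 -le 1 ']' argument check, and the real/user/sys summary in the trace, run_test behaves roughly like the outline below; the exact banner text and bookkeeping live in autotest_common.sh and are assumed here:

    # Rough outline of run_test as suggested by the log, not a verbatim copy.
    run_test() {
        local test_name=$1; shift
        (( $# >= 1 )) || return 1      # mirrors the '[' 2 -le 1 ']' guard
        echo "START TEST $test_name"   # banner seen in the log
        time "$@"                      # produces the real/user/sys lines
        echo "END TEST $test_name"
    }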
00:23:49.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:23:49.815 06:16:19 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:23:49.815 06:16:19 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:23:49.815 06:16:19 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:23:49.815 06:16:19 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:23:49.815 06:16:19 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:23:49.815 06:16:19 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:49.815 06:16:19 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:23:49.815 06:16:19 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:23:49.815 06:16:19 -- common/autotest_common.sh@34 -- # set -e 00:23:49.815 06:16:19 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:23:49.815 06:16:19 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:23:49.815 06:16:19 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:23:49.815 06:16:19 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:23:49.816 06:16:19 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:23:49.816 06:16:19 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:23:49.816 06:16:19 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:23:49.816 06:16:19 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:23:49.816 06:16:19 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:23:49.816 06:16:19 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:23:49.816 06:16:19 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:23:49.816 06:16:19 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:23:49.816 06:16:19 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:23:49.816 06:16:19 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:23:49.816 06:16:19 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:23:49.816 06:16:19 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:23:49.816 06:16:19 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:23:49.816 06:16:19 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:23:49.816 06:16:19 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:23:49.816 06:16:19 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:23:49.816 06:16:19 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:23:49.816 06:16:19 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:23:49.816 06:16:19 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:23:49.816 06:16:19 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:23:49.816 06:16:19 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:23:49.816 06:16:19 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:23:49.816 06:16:19 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:23:49.816 06:16:19 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:23:49.816 06:16:19 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:23:49.816 06:16:19 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:23:49.816 06:16:19 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 
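The first few trace lines of the new suite are the standard SPDK test preamble: derive testdir from the script's own path, walk up to the repo root, and source autotest_common.sh. A condensed restatement of the dirname/readlink calls shown above (the actual script reaches autotest_common.sh indirectly via interrupt_common.sh, as the source lines in the trace show):

    # Locate the test directory and repo root from the script's own path.
    testdir=$(readlink -f "$(dirname "$0")")
    rootdir=$(readlink -f "$testdir/../..")
    source "$rootdir/test/common/autotest_common.sh"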
00:23:49.816 06:16:19 -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:23:49.816 06:16:19 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:23:49.816 06:16:19 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:23:49.816 06:16:19 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:23:49.816 06:16:19 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:23:49.816 06:16:19 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:23:49.816 06:16:19 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:23:49.816 06:16:19 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:23:49.816 06:16:19 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:23:49.816 06:16:19 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:23:49.816 06:16:19 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:23:49.816 06:16:19 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:23:49.816 06:16:19 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:23:49.816 06:16:19 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:23:49.816 06:16:19 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:23:49.816 06:16:19 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:23:49.816 06:16:19 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:23:49.816 06:16:19 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:23:49.816 06:16:19 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:23:49.816 06:16:19 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:23:49.816 06:16:19 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:23:49.816 06:16:19 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:23:49.816 06:16:19 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:23:49.816 06:16:19 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:23:49.816 06:16:19 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:23:49.816 06:16:19 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:23:49.816 06:16:19 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:23:49.816 06:16:19 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:23:49.816 06:16:19 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:23:49.816 06:16:19 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:23:49.816 06:16:19 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:23:49.816 06:16:19 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:23:49.816 06:16:19 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:23:49.816 06:16:19 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:23:49.816 06:16:19 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:23:49.816 06:16:19 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:23:49.816 06:16:19 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:23:49.816 06:16:19 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:23:49.816 06:16:19 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:23:49.816 06:16:19 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:23:49.816 06:16:19 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:23:49.816 06:16:19 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:23:49.816 06:16:19 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:23:49.816 06:16:19 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:23:49.816 06:16:19 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:23:49.816 06:16:19 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:23:49.816 06:16:19 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:23:49.816 06:16:19 -- 
common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:23:49.816 06:16:19 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:23:49.816 06:16:19 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:23:49.816 06:16:19 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:23:49.816 06:16:19 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:23:49.816 06:16:19 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:23:49.816 06:16:19 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:23:49.816 06:16:19 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:23:49.816 06:16:19 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:23:49.816 06:16:19 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:23:49.816 06:16:19 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:23:49.816 06:16:19 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:23:49.816 06:16:19 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:23:49.816 06:16:19 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:23:49.816 06:16:19 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:23:49.816 06:16:19 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:23:49.816 06:16:19 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:23:49.816 06:16:19 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:23:49.816 06:16:19 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:23:49.816 06:16:19 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:23:49.816 06:16:19 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:23:49.816 #define SPDK_CONFIG_H 00:23:49.816 #define SPDK_CONFIG_APPS 1 00:23:49.816 #define SPDK_CONFIG_ARCH native 00:23:49.816 #define SPDK_CONFIG_ASAN 1 00:23:49.816 #undef SPDK_CONFIG_AVAHI 00:23:49.816 #undef SPDK_CONFIG_CET 00:23:49.816 #define SPDK_CONFIG_COVERAGE 1 00:23:49.816 #define SPDK_CONFIG_CROSS_PREFIX 00:23:49.816 #undef SPDK_CONFIG_CRYPTO 00:23:49.816 #undef SPDK_CONFIG_CRYPTO_MLX5 00:23:49.816 #undef SPDK_CONFIG_CUSTOMOCF 00:23:49.816 #undef SPDK_CONFIG_DAOS 00:23:49.816 #define SPDK_CONFIG_DAOS_DIR 00:23:49.816 #define SPDK_CONFIG_DEBUG 1 00:23:49.816 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:23:49.816 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:23:49.816 #define SPDK_CONFIG_DPDK_INC_DIR 00:23:49.816 #define SPDK_CONFIG_DPDK_LIB_DIR 00:23:49.816 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:23:49.816 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:23:49.816 #define SPDK_CONFIG_EXAMPLES 1 00:23:49.816 #undef SPDK_CONFIG_FC 00:23:49.816 #define SPDK_CONFIG_FC_PATH 00:23:49.816 #define SPDK_CONFIG_FIO_PLUGIN 1 00:23:49.816 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:23:49.816 #undef SPDK_CONFIG_FUSE 00:23:49.816 #undef SPDK_CONFIG_FUZZER 00:23:49.816 #define SPDK_CONFIG_FUZZER_LIB 00:23:49.816 #undef SPDK_CONFIG_GOLANG 00:23:49.816 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:23:49.816 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:23:49.816 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:23:49.816 #undef SPDK_CONFIG_HAVE_LIBBSD 00:23:49.816 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:23:49.816 #define 
SPDK_CONFIG_IDXD 1 00:23:49.816 #undef SPDK_CONFIG_IDXD_KERNEL 00:23:49.816 #undef SPDK_CONFIG_IPSEC_MB 00:23:49.816 #define SPDK_CONFIG_IPSEC_MB_DIR 00:23:49.816 #define SPDK_CONFIG_ISAL 1 00:23:49.816 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:23:49.816 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:23:49.816 #define SPDK_CONFIG_LIBDIR 00:23:49.816 #undef SPDK_CONFIG_LTO 00:23:49.816 #define SPDK_CONFIG_MAX_LCORES 00:23:49.816 #define SPDK_CONFIG_NVME_CUSE 1 00:23:49.816 #undef SPDK_CONFIG_OCF 00:23:49.816 #define SPDK_CONFIG_OCF_PATH 00:23:49.816 #define SPDK_CONFIG_OPENSSL_PATH 00:23:49.816 #undef SPDK_CONFIG_PGO_CAPTURE 00:23:49.816 #undef SPDK_CONFIG_PGO_USE 00:23:49.816 #define SPDK_CONFIG_PREFIX /usr/local 00:23:49.816 #undef SPDK_CONFIG_RAID5F 00:23:49.816 #undef SPDK_CONFIG_RBD 00:23:49.816 #define SPDK_CONFIG_RDMA 1 00:23:49.816 #define SPDK_CONFIG_RDMA_PROV verbs 00:23:49.816 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:23:49.816 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:23:49.816 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:23:49.816 #undef SPDK_CONFIG_SHARED 00:23:49.816 #undef SPDK_CONFIG_SMA 00:23:49.816 #define SPDK_CONFIG_TESTS 1 00:23:49.816 #undef SPDK_CONFIG_TSAN 00:23:49.816 #undef SPDK_CONFIG_UBLK 00:23:49.816 #define SPDK_CONFIG_UBSAN 1 00:23:49.816 #define SPDK_CONFIG_UNIT_TESTS 1 00:23:49.816 #undef SPDK_CONFIG_URING 00:23:49.816 #define SPDK_CONFIG_URING_PATH 00:23:49.816 #undef SPDK_CONFIG_URING_ZNS 00:23:49.816 #undef SPDK_CONFIG_USDT 00:23:49.816 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:23:49.816 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:23:49.816 #undef SPDK_CONFIG_VFIO_USER 00:23:49.816 #define SPDK_CONFIG_VFIO_USER_DIR 00:23:49.816 #define SPDK_CONFIG_VHOST 1 00:23:49.816 #define SPDK_CONFIG_VIRTIO 1 00:23:49.816 #undef SPDK_CONFIG_VTUNE 00:23:49.816 #define SPDK_CONFIG_VTUNE_DIR 00:23:49.816 #define SPDK_CONFIG_WERROR 1 00:23:49.816 #define SPDK_CONFIG_WPDK_DIR 00:23:49.816 #undef SPDK_CONFIG_XNVME 00:23:49.817 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:23:49.817 06:16:19 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:23:49.817 06:16:19 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:49.817 06:16:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:49.817 06:16:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:49.817 06:16:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:49.817 06:16:19 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:49.817 06:16:19 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:49.817 06:16:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:49.817 06:16:19 -- paths/export.sh@5 -- # export PATH 00:23:49.817 06:16:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:23:49.817 06:16:19 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:23:49.817 06:16:19 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:23:49.817 06:16:19 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:23:49.817 06:16:19 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:23:49.817 06:16:19 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:23:49.817 06:16:19 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:23:49.817 06:16:19 -- pm/common@16 -- # TEST_TAG=N/A 00:23:49.817 06:16:19 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:23:49.817 06:16:19 -- common/autotest_common.sh@52 -- # : 1 00:23:49.817 06:16:19 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:23:49.817 06:16:19 -- common/autotest_common.sh@56 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:23:49.817 06:16:19 -- common/autotest_common.sh@58 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:23:49.817 06:16:19 -- common/autotest_common.sh@60 -- # : 1 00:23:49.817 06:16:19 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:23:49.817 06:16:19 -- common/autotest_common.sh@62 -- # : 1 00:23:49.817 06:16:19 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:23:49.817 06:16:19 -- common/autotest_common.sh@64 -- # : 00:23:49.817 06:16:19 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:23:49.817 06:16:19 -- common/autotest_common.sh@66 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:23:49.817 06:16:19 -- common/autotest_common.sh@68 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:23:49.817 06:16:19 -- common/autotest_common.sh@70 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:23:49.817 06:16:19 -- common/autotest_common.sh@72 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:23:49.817 06:16:19 -- common/autotest_common.sh@74 -- # : 1 00:23:49.817 06:16:19 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:23:49.817 06:16:19 -- common/autotest_common.sh@76 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:23:49.817 06:16:19 -- common/autotest_common.sh@78 -- # : 0 00:23:49.817 06:16:19 -- 
common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:23:49.817 06:16:19 -- common/autotest_common.sh@80 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:23:49.817 06:16:19 -- common/autotest_common.sh@82 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:23:49.817 06:16:19 -- common/autotest_common.sh@84 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:23:49.817 06:16:19 -- common/autotest_common.sh@86 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:23:49.817 06:16:19 -- common/autotest_common.sh@88 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:23:49.817 06:16:19 -- common/autotest_common.sh@90 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:23:49.817 06:16:19 -- common/autotest_common.sh@92 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:23:49.817 06:16:19 -- common/autotest_common.sh@94 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:23:49.817 06:16:19 -- common/autotest_common.sh@96 -- # : rdma 00:23:49.817 06:16:19 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:23:49.817 06:16:19 -- common/autotest_common.sh@98 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:23:49.817 06:16:19 -- common/autotest_common.sh@100 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:23:49.817 06:16:19 -- common/autotest_common.sh@102 -- # : 1 00:23:49.817 06:16:19 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:23:49.817 06:16:19 -- common/autotest_common.sh@104 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:23:49.817 06:16:19 -- common/autotest_common.sh@106 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:23:49.817 06:16:19 -- common/autotest_common.sh@108 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:23:49.817 06:16:19 -- common/autotest_common.sh@110 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:23:49.817 06:16:19 -- common/autotest_common.sh@112 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:23:49.817 06:16:19 -- common/autotest_common.sh@114 -- # : 1 00:23:49.817 06:16:19 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:23:49.817 06:16:19 -- common/autotest_common.sh@116 -- # : 1 00:23:49.817 06:16:19 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:23:49.817 06:16:19 -- common/autotest_common.sh@118 -- # : 00:23:49.817 06:16:19 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:23:49.817 06:16:19 -- common/autotest_common.sh@120 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:23:49.817 06:16:19 -- common/autotest_common.sh@122 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:23:49.817 06:16:19 -- common/autotest_common.sh@124 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:23:49.817 06:16:19 -- common/autotest_common.sh@126 -- # : 0 00:23:49.817 
06:16:19 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:23:49.817 06:16:19 -- common/autotest_common.sh@128 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:23:49.817 06:16:19 -- common/autotest_common.sh@130 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:23:49.817 06:16:19 -- common/autotest_common.sh@132 -- # : 00:23:49.817 06:16:19 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:23:49.817 06:16:19 -- common/autotest_common.sh@134 -- # : true 00:23:49.817 06:16:19 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:23:49.817 06:16:19 -- common/autotest_common.sh@136 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:23:49.817 06:16:19 -- common/autotest_common.sh@138 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:23:49.817 06:16:19 -- common/autotest_common.sh@140 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:23:49.817 06:16:19 -- common/autotest_common.sh@142 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:23:49.817 06:16:19 -- common/autotest_common.sh@144 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:23:49.817 06:16:19 -- common/autotest_common.sh@146 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:23:49.817 06:16:19 -- common/autotest_common.sh@148 -- # : 00:23:49.817 06:16:19 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:23:49.817 06:16:19 -- common/autotest_common.sh@150 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:23:49.817 06:16:19 -- common/autotest_common.sh@152 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:23:49.817 06:16:19 -- common/autotest_common.sh@154 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:23:49.817 06:16:19 -- common/autotest_common.sh@156 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:23:49.817 06:16:19 -- common/autotest_common.sh@158 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:23:49.817 06:16:19 -- common/autotest_common.sh@160 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:23:49.817 06:16:19 -- common/autotest_common.sh@163 -- # : 00:23:49.817 06:16:19 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:23:49.817 06:16:19 -- common/autotest_common.sh@165 -- # : 0 00:23:49.817 06:16:19 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:23:49.817 06:16:19 -- common/autotest_common.sh@167 -- # : 0 00:23:49.817 06:16:20 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:23:49.817 06:16:20 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:23:49.817 06:16:20 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:23:49.817 06:16:20 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:23:49.818 06:16:20 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:23:49.818 06:16:20 -- 
common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:49.818 06:16:20 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:49.818 06:16:20 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:49.818 06:16:20 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:23:49.818 06:16:20 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:23:49.818 06:16:20 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:23:49.818 06:16:20 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:23:49.818 06:16:20 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:23:49.818 06:16:20 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:23:49.818 06:16:20 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:23:49.818 06:16:20 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:23:49.818 06:16:20 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:23:49.818 06:16:20 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:23:49.818 06:16:20 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:23:49.818 06:16:20 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:23:49.818 06:16:20 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:23:49.818 06:16:20 -- common/autotest_common.sh@196 -- # cat 00:23:49.818 06:16:20 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:23:49.818 06:16:20 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:23:49.818 06:16:20 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:23:49.818 06:16:20 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:23:49.818 
06:16:20 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:23:49.818 06:16:20 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:23:49.818 06:16:20 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:23:49.818 06:16:20 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:23:49.818 06:16:20 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:23:49.818 06:16:20 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:23:49.818 06:16:20 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:23:49.818 06:16:20 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:23:49.818 06:16:20 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:23:49.818 06:16:20 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:23:49.818 06:16:20 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:23:49.818 06:16:20 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:23:49.818 06:16:20 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:23:49.818 06:16:20 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:23:49.818 06:16:20 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:23:49.818 06:16:20 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:23:49.818 06:16:20 -- common/autotest_common.sh@249 -- # export valgrind= 00:23:49.818 06:16:20 -- common/autotest_common.sh@249 -- # valgrind= 00:23:49.818 06:16:20 -- common/autotest_common.sh@255 -- # uname -s 00:23:49.818 06:16:20 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:23:49.818 06:16:20 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:23:49.818 06:16:20 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:23:49.818 06:16:20 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:23:49.818 06:16:20 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:23:49.818 06:16:20 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:23:49.818 06:16:20 -- common/autotest_common.sh@265 -- # MAKE=make 00:23:49.818 06:16:20 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:23:49.818 06:16:20 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:23:49.818 06:16:20 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:23:49.818 06:16:20 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:23:49.818 06:16:20 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:23:49.818 06:16:20 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:23:49.818 06:16:20 -- common/autotest_common.sh@309 -- # [[ -z 127883 ]] 00:23:49.818 06:16:20 -- common/autotest_common.sh@309 -- # kill -0 127883 00:23:49.818 06:16:20 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:23:49.818 06:16:20 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:23:49.818 06:16:20 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:23:49.818 06:16:20 -- common/autotest_common.sh@322 -- # local mount target_dir 00:23:49.818 06:16:20 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:23:49.818 06:16:20 -- common/autotest_common.sh@325 -- # local source fs size 
avail mount use 00:23:49.818 06:16:20 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:23:49.818 06:16:20 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:23:49.818 06:16:20 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.S6UiVX 00:23:49.818 06:16:20 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:23:49.818 06:16:20 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:23:49.818 06:16:20 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:23:49.818 06:16:20 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.S6UiVX/tests/interrupt /tmp/spdk.S6UiVX 00:23:49.818 06:16:20 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:23:49.818 06:16:20 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:23:49.818 06:16:20 -- common/autotest_common.sh@318 -- # df -T 00:23:49.818 06:16:20 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:23:49.818 06:16:20 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:23:49.818 06:16:20 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:23:49.818 06:16:20 -- common/autotest_common.sh@353 -- # avails["$mount"]=1248956416 00:23:49.818 06:16:20 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253683200 00:23:49.818 06:16:20 -- common/autotest_common.sh@354 -- # uses["$mount"]=4726784 00:23:49.818 06:16:20 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:23:49.818 06:16:20 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:23:49.818 06:16:20 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:23:49.818 06:16:20 -- common/autotest_common.sh@353 -- # avails["$mount"]=10273681408 00:23:49.818 06:16:20 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20616794112 00:23:49.818 06:16:20 -- common/autotest_common.sh@354 -- # uses["$mount"]=10326335488 00:23:49.818 06:16:20 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:23:49.818 06:16:20 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:23:49.818 06:16:20 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:23:49.818 06:16:20 -- common/autotest_common.sh@353 -- # avails["$mount"]=6265810944 00:23:49.818 06:16:20 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6268403712 00:23:49.818 06:16:20 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:23:49.818 06:16:20 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:23:49.818 06:16:20 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:23:49.818 06:16:20 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:23:49.818 06:16:20 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:23:49.818 06:16:20 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:23:49.818 06:16:20 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:23:49.818 06:16:20 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:23:49.818 06:16:20 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:23:49.818 06:16:20 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:23:49.818 06:16:20 -- common/autotest_common.sh@353 -- # avails["$mount"]=103061504 00:23:49.818 06:16:20 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109395968 00:23:49.818 06:16:20 -- 
common/autotest_common.sh@354 -- # uses["$mount"]=6334464 00:23:49.818 06:16:20 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:23:49.818 06:16:20 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:23:49.818 06:16:20 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:23:49.818 06:16:20 -- common/autotest_common.sh@353 -- # avails["$mount"]=1253675008 00:23:49.818 06:16:20 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253679104 00:23:49.818 06:16:20 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:23:49.818 06:16:20 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:23:49.818 06:16:20 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:23:49.818 06:16:20 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:23:49.818 06:16:20 -- common/autotest_common.sh@353 -- # avails["$mount"]=97224839168 00:23:49.818 06:16:20 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:23:49.818 06:16:20 -- common/autotest_common.sh@354 -- # uses["$mount"]=2477940736 00:23:49.818 06:16:20 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:23:49.818 06:16:20 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:23:49.818 * Looking for test storage... 00:23:49.818 06:16:20 -- common/autotest_common.sh@359 -- # local target_space new_size 00:23:49.818 06:16:20 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:23:49.819 06:16:20 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:23:49.819 06:16:20 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:23:49.819 06:16:20 -- common/autotest_common.sh@363 -- # mount=/ 00:23:49.819 06:16:20 -- common/autotest_common.sh@365 -- # target_space=10273681408 00:23:49.819 06:16:20 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:23:49.819 06:16:20 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:23:49.819 06:16:20 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:23:49.819 06:16:20 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:23:49.819 06:16:20 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:23:49.819 06:16:20 -- common/autotest_common.sh@372 -- # new_size=12540928000 00:23:49.819 06:16:20 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:23:49.819 06:16:20 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:23:49.819 06:16:20 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:23:49.819 06:16:20 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:23:49.819 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:23:49.819 06:16:20 -- common/autotest_common.sh@380 -- # return 0 00:23:49.819 06:16:20 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:23:49.819 06:16:20 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:23:49.819 06:16:20 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:23:49.819 06:16:20 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:23:49.819 
06:16:20 -- common/autotest_common.sh@1672 -- # true 00:23:49.819 06:16:20 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:23:49.819 06:16:20 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:23:49.819 06:16:20 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:23:49.819 06:16:20 -- common/autotest_common.sh@27 -- # exec 00:23:49.819 06:16:20 -- common/autotest_common.sh@29 -- # exec 00:23:49.819 06:16:20 -- common/autotest_common.sh@31 -- # xtrace_restore 00:23:49.819 06:16:20 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:23:49.819 06:16:20 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:23:49.819 06:16:20 -- common/autotest_common.sh@18 -- # set -x 00:23:49.819 06:16:20 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:49.819 06:16:20 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:23:49.819 06:16:20 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:23:49.819 06:16:20 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:23:49.819 06:16:20 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:23:49.819 06:16:20 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:23:49.819 06:16:20 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:23:49.819 06:16:20 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:23:49.819 06:16:20 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:23:49.819 06:16:20 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.819 06:16:20 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:23:49.819 06:16:20 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=127932 00:23:49.819 06:16:20 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:49.819 06:16:20 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:23:49.819 06:16:20 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 127932 /var/tmp/spdk.sock 00:23:49.819 06:16:20 -- common/autotest_common.sh@819 -- # '[' -z 127932 ']' 00:23:49.819 06:16:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.819 06:16:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:49.819 06:16:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.819 06:16:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:49.819 06:16:20 -- common/autotest_common.sh@10 -- # set +x 00:23:49.819 [2024-06-11 06:16:20.170132] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
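
The set_test_storage trace above shows how the harness picks a scratch directory: it parses df output into parallel associative arrays (mounts/fss/sizes/avails/uses), then takes the first storage candidate whose backing filesystem has enough free space for the 2 GiB it requests. A minimal standalone sketch of that selection idea, assuming GNU coreutils df and bash 4+; pick_test_storage and the fallback path are hypothetical names, not the SPDK helpers:

    # pick_test_storage: hypothetical sketch of the candidate-selection
    # logic in set_test_storage(); not the autotest_common.sh helper itself.
    pick_test_storage() {
        local requested_size=$1; shift
        local dir avail_kb
        for dir in "$@"; do
            mkdir -p "$dir" 2>/dev/null || continue
            # POSIX df columns: Filesystem 1024-blocks Used Available Capacity Mounted-on
            avail_kb=$(df -P "$dir" | awk 'NR==2 {print $4}') || continue
            if (( avail_kb * 1024 >= requested_size )); then
                printf '* Found test storage at %s\n' "$dir" >&2
                echo "$dir"
                return 0
            fi
        done
        return 1
    }

    # Usage, with the ~2 GiB the harness asks for ("/tmp/spdk-scratch" is
    # a stand-in for the mktemp -udt fallback seen in the trace):
    # storage=$(pick_test_storage 2147483648 "$testdir" /tmp/spdk-scratch)
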
00:23:49.819 [2024-06-11 06:16:20.171012] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127932 ] 00:23:49.819 [2024-06-11 06:16:20.362177] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:50.077 [2024-06-11 06:16:20.606406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.077 [2024-06-11 06:16:20.606597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.077 [2024-06-11 06:16:20.606603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.643 [2024-06-11 06:16:20.994490] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:50.643 06:16:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:50.643 06:16:21 -- common/autotest_common.sh@852 -- # return 0 00:23:50.643 06:16:21 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:23:50.643 06:16:21 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:50.902 Malloc0 00:23:50.902 Malloc1 00:23:50.902 Malloc2 00:23:50.902 06:16:21 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:23:50.902 06:16:21 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:23:50.902 06:16:21 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:23:50.902 06:16:21 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:23:51.160 5000+0 records in 00:23:51.160 5000+0 records out 00:23:51.160 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0365444 s, 280 MB/s 00:23:51.160 06:16:21 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:23:51.160 AIO0 00:23:51.160 06:16:21 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 127932 00:23:51.160 06:16:21 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 127932 without_thd 00:23:51.160 06:16:21 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=127932 00:23:51.160 06:16:21 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:23:51.160 06:16:21 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:23:51.160 06:16:21 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:23:51.160 06:16:21 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:23:51.160 06:16:21 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:23:51.160 06:16:21 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:23:51.160 06:16:21 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:23:51.160 06:16:21 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:23:51.160 06:16:21 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:23:51.419 06:16:21 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:23:51.419 06:16:21 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:23:51.419 06:16:21 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 
0x4 00:23:51.419 06:16:21 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:23:51.419 06:16:21 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:23:51.419 06:16:21 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:23:51.419 06:16:21 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:23:51.419 06:16:21 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:23:51.419 06:16:21 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:23:51.678 06:16:22 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:23:51.678 06:16:22 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:23:51.678 spdk_thread ids are 1 on reactor0. 00:23:51.678 06:16:22 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:23:51.678 06:16:22 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:23:51.678 06:16:22 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 127932 0 00:23:51.678 06:16:22 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 127932 0 idle 00:23:51.678 06:16:22 -- interrupt/interrupt_common.sh@33 -- # local pid=127932 00:23:51.678 06:16:22 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:23:51.678 06:16:22 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:51.678 06:16:22 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:51.678 06:16:22 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:51.678 06:16:22 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:51.678 06:16:22 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:51.678 06:16:22 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:51.678 06:16:22 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 127932 -w 256 00:23:51.678 06:16:22 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 127932 root 20 0 20.1t 145756 28848 S 0.0 1.2 0:00.96 reactor_0' 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@48 -- # echo 127932 root 20 0 20.1t 145756 28848 S 0.0 1.2 0:00.96 reactor_0 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:51.936 06:16:22 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:23:51.936 06:16:22 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 127932 1 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 127932 1 idle 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@33 -- # local pid=127932 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:51.936 
06:16:22 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 127932 -w 256 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 127941 root 20 0 20.1t 145756 28848 S 0.0 1.2 0:00.00 reactor_1' 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@48 -- # echo 127941 root 20 0 20.1t 145756 28848 S 0.0 1.2 0:00.00 reactor_1 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:23:51.936 06:16:22 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:51.936 06:16:22 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:23:51.936 06:16:22 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 127932 2 00:23:51.937 06:16:22 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 127932 2 idle 00:23:51.937 06:16:22 -- interrupt/interrupt_common.sh@33 -- # local pid=127932 00:23:51.937 06:16:22 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:23:51.937 06:16:22 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:51.937 06:16:22 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:51.937 06:16:22 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:51.937 06:16:22 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:51.937 06:16:22 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:51.937 06:16:22 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:51.937 06:16:22 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 127932 -w 256 00:23:51.937 06:16:22 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:23:52.195 06:16:22 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 127942 root 20 0 20.1t 145756 28848 S 0.0 1.2 0:00.00 reactor_2' 00:23:52.195 06:16:22 -- interrupt/interrupt_common.sh@48 -- # echo 127942 root 20 0 20.1t 145756 28848 S 0.0 1.2 0:00.00 reactor_2 00:23:52.195 06:16:22 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:52.195 06:16:22 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:52.195 06:16:22 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:23:52.195 06:16:22 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:23:52.195 06:16:22 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:52.195 06:16:22 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:52.195 06:16:22 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:23:52.195 06:16:22 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:52.195 06:16:22 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:23:52.195 06:16:22 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 
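
Every reactor_is_idle / reactor_is_busy check in this trace reduces to one pipeline: run top in batch mode restricted to the target pid with per-thread rows, grep out the reactor_N thread, and read the %CPU column. A condensed sketch of that check, assuming procps top where %CPU is the ninth column at the widened output width (is_reactor_idle is a hypothetical name; the real helper also retries up to 10 times):

    # is_reactor_idle: hypothetical, condensed form of the
    # reactor_is_busy_or_idle retry helper being traced here.
    is_reactor_idle() {
        local pid=$1 idx=$2 line cpu_rate
        # -b batch, -H per-thread rows, -n 1 single sample, -w 256 wide
        # enough that the reactor_N command name is not truncated
        line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx") || return 1
        cpu_rate=$(awk '{print $9}' <<<"$line")   # %CPU column
        cpu_rate=${cpu_rate%.*}                   # 99.9 -> 99 for integer tests
        # Idle means not above 30 %CPU; the busy check instead
        # demands at least 70 %CPU.
        (( ${cpu_rate:-0} <= 30 ))
    }

    # Usage against the first target:
    # is_reactor_idle 127932 2 && echo 'reactor_2 is idle'
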
00:23:52.195 06:16:22 -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:23:52.454 [2024-06-11 06:16:22.991358] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:52.454 06:16:23 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:23:52.712 [2024-06-11 06:16:23.239064] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:23:52.712 [2024-06-11 06:16:23.239744] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:23:52.712 06:16:23 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:23:52.971 [2024-06-11 06:16:23.410898] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:23:52.971 [2024-06-11 06:16:23.411461] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:23:52.971 06:16:23 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:23:52.971 06:16:23 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 127932 0 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 127932 0 busy 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@33 -- # local pid=127932 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 127932 -w 256 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 127932 root 20 0 20.1t 145864 28848 R 99.9 1.2 0:01.33 reactor_0' 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@48 -- # echo 127932 root 20 0 20.1t 145864 28848 R 99.9 1.2 0:01.33 reactor_0 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:52.971 06:16:23 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:23:52.971 06:16:23 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 127932 2 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 127932 2 busy 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@33 -- # local pid=127932 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:23:52.971 
06:16:23 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:23:52.971 06:16:23 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 127932 -w 256 00:23:53.229 06:16:23 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 127942 root 20 0 20.1t 145864 28848 R 87.5 1.2 0:00.35 reactor_2' 00:23:53.229 06:16:23 -- interrupt/interrupt_common.sh@48 -- # echo 127942 root 20 0 20.1t 145864 28848 R 87.5 1.2 0:00.35 reactor_2 00:23:53.229 06:16:23 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:53.229 06:16:23 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:53.229 06:16:23 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=87.5 00:23:53.229 06:16:23 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=87 00:23:53.229 06:16:23 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:23:53.229 06:16:23 -- interrupt/interrupt_common.sh@51 -- # [[ 87 -lt 70 ]] 00:23:53.229 06:16:23 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:23:53.229 06:16:23 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:53.229 06:16:23 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:23:53.488 [2024-06-11 06:16:24.010694] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:23:53.488 [2024-06-11 06:16:24.010951] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:23:53.488 06:16:24 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:23:53.488 06:16:24 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 127932 2 00:23:53.488 06:16:24 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 127932 2 idle 00:23:53.488 06:16:24 -- interrupt/interrupt_common.sh@33 -- # local pid=127932 00:23:53.488 06:16:24 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:23:53.488 06:16:24 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:53.488 06:16:24 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:53.488 06:16:24 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:53.488 06:16:24 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:53.488 06:16:24 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:53.488 06:16:24 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:53.488 06:16:24 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:23:53.488 06:16:24 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 127932 -w 256 00:23:53.747 06:16:24 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 127942 root 20 0 20.1t 145928 28848 S 0.0 1.2 0:00.60 reactor_2' 00:23:53.747 06:16:24 -- interrupt/interrupt_common.sh@48 -- # echo 127942 root 20 0 20.1t 145928 28848 S 0.0 1.2 0:00.60 reactor_2 00:23:53.747 06:16:24 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:53.747 06:16:24 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:53.747 06:16:24 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:23:53.747 06:16:24 -- 
interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:23:53.747 06:16:24 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:53.747 06:16:24 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:53.747 06:16:24 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:23:53.747 06:16:24 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:53.747 06:16:24 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:23:54.005 [2024-06-11 06:16:24.434953] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:23:54.005 [2024-06-11 06:16:24.435411] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:23:54.005 06:16:24 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:23:54.005 06:16:24 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:23:54.005 06:16:24 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:23:54.264 [2024-06-11 06:16:24.691373] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:54.264 06:16:24 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 127932 0 00:23:54.264 06:16:24 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 127932 0 idle 00:23:54.264 06:16:24 -- interrupt/interrupt_common.sh@33 -- # local pid=127932 00:23:54.264 06:16:24 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:23:54.264 06:16:24 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:54.264 06:16:24 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:54.264 06:16:24 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:54.264 06:16:24 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:54.264 06:16:24 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:54.264 06:16:24 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:54.264 06:16:24 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:23:54.264 06:16:24 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 127932 -w 256 00:23:54.264 06:16:24 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 127932 root 20 0 20.1t 146020 28848 S 0.0 1.2 0:02.16 reactor_0' 00:23:54.264 06:16:24 -- interrupt/interrupt_common.sh@48 -- # echo 127932 root 20 0 20.1t 146020 28848 S 0.0 1.2 0:02.16 reactor_0 00:23:54.264 06:16:24 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:54.264 06:16:24 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:54.264 06:16:24 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:23:54.264 06:16:24 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:23:54.264 06:16:24 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:54.264 06:16:24 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:54.264 06:16:24 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:23:54.264 06:16:24 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:54.264 06:16:24 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:23:54.264 06:16:24 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:23:54.264 06:16:24 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:23:54.264 06:16:24 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 127932 
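
The mode flipping exercised above is driven entirely over the RPC socket: reactor_set_interrupt_mode comes from the interrupt_plugin shipped alongside the example app (hence --plugin interrupt_plugin and the PYTHONPATH extension to examples/interrupt_tgt earlier in the trace), and -d means "disable interrupt mode", i.e. put the reactor back into busy polling. One disable/verify/re-enable round, in the shape the test performs it ($intr_tgt_pid is the traced variable holding the target's pid):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Put reactor 2 into poll mode (-d = disable interrupt mode) ...
    "$rpc" --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d
    # ... verify it now burns CPU (R state, ~99.9 %CPU in the trace) ...
    top -bHn 1 -p "$intr_tgt_pid" -w 256 | grep reactor_2
    # ... then flip it back to interrupt mode and expect it near 0 %CPU.
    "$rpc" --plugin interrupt_plugin reactor_set_interrupt_mode 2
    top -bHn 1 -p "$intr_tgt_pid" -w 256 | grep reactor_2
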
00:23:54.264 06:16:24 -- common/autotest_common.sh@926 -- # '[' -z 127932 ']' 00:23:54.264 06:16:24 -- common/autotest_common.sh@930 -- # kill -0 127932 00:23:54.264 06:16:24 -- common/autotest_common.sh@931 -- # uname 00:23:54.264 06:16:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:54.264 06:16:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127932 00:23:54.523 06:16:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:54.523 06:16:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:54.523 killing process with pid 127932 00:23:54.523 06:16:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127932' 00:23:54.523 06:16:24 -- common/autotest_common.sh@945 -- # kill 127932 00:23:54.523 06:16:24 -- common/autotest_common.sh@950 -- # wait 127932 00:23:56.427 06:16:26 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:23:56.427 06:16:26 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:23:56.427 06:16:26 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:23:56.427 06:16:26 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.427 06:16:26 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:23:56.427 06:16:26 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=128088 00:23:56.427 06:16:26 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:23:56.427 06:16:26 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:56.427 06:16:26 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 128088 /var/tmp/spdk.sock 00:23:56.427 06:16:26 -- common/autotest_common.sh@819 -- # '[' -z 128088 ']' 00:23:56.427 06:16:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.427 06:16:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:56.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.427 06:16:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.427 06:16:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:56.427 06:16:26 -- common/autotest_common.sh@10 -- # set +x 00:23:56.427 [2024-06-11 06:16:26.631646] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:23:56.427 [2024-06-11 06:16:26.631794] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128088 ] 00:23:56.427 [2024-06-11 06:16:26.802512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:56.427 [2024-06-11 06:16:27.038297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.427 [2024-06-11 06:16:27.038494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.427 [2024-06-11 06:16:27.038498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:56.996 [2024-06-11 06:16:27.426355] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
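
killprocess, traced above for pid 127932 and reused below for the second target, is the harness's defensive teardown: confirm the pid is still alive with kill -0, read the process name so it never signals the wrong thing (ps --no-headers -o comm= reports reactor_0 for an SPDK app), then kill and wait so the child's exit status is reaped. A condensed sketch of those same steps (killprocess_sketch is a hypothetical name):

    # Hypothetical condensed killprocess(); mirrors the steps traced above.
    killprocess_sketch() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1               # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")  # "reactor_0" for an SPDK app
        # The real helper special-cases a process named sudo;
        # this sketch simply bails out instead.
        [[ $name == sudo ]] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap the child so its exit code is collected
    }
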
00:23:56.996 06:16:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:56.996 06:16:27 -- common/autotest_common.sh@852 -- # return 0 00:23:56.996 06:16:27 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:23:56.996 06:16:27 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:57.255 Malloc0 00:23:57.255 Malloc1 00:23:57.255 Malloc2 00:23:57.255 06:16:27 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:23:57.255 06:16:27 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:23:57.255 06:16:27 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:23:57.255 06:16:27 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:23:57.255 5000+0 records in 00:23:57.255 5000+0 records out 00:23:57.255 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0281901 s, 363 MB/s 00:23:57.255 06:16:27 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:23:57.823 AIO0 00:23:57.823 06:16:28 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 128088 00:23:57.823 06:16:28 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 128088 00:23:57.823 06:16:28 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=128088 00:23:57.823 06:16:28 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:23:57.823 06:16:28 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:23:57.823 06:16:28 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:23:57.823 06:16:28 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:23:57.823 06:16:28 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:23:57.823 06:16:28 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:23:57.823 06:16:28 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:23:57.823 06:16:28 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:23:57.823 06:16:28 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:23:57.823 06:16:28 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:23:57.823 06:16:28 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:23:57.823 06:16:28 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:23:57.823 06:16:28 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:23:57.823 06:16:28 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:23:57.823 06:16:28 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:23:57.823 06:16:28 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:23:57.823 06:16:28 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:23:57.823 06:16:28 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:23:58.082 06:16:28 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:23:58.082 06:16:28 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:23:58.082 spdk_thread ids are 1 on reactor0. 
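
reactor_get_thread_ids, run twice just above, resolves which SPDK threads live on a given reactor by calling thread_get_stats over RPC and filtering on each thread's cpumask field with jq. Note that the mask handed to jq has its 0x prefix normalized away (0x1 becomes 1, 0x4 becomes 4), and that an empty result, as for mask 0x4 here, simply means no app thread is pinned to that core. A sketch of the same query (reactor_thread_ids is a hypothetical wrapper name; the jq filter is the one from the trace):

    # Print ids of SPDK threads whose cpumask matches $1 (e.g. 0x1).
    reactor_thread_ids() {
        local reactor_cpumask=${1#0x}   # jq compares against "1", not "0x1"
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats \
            | jq --arg reactor_cpumask "$reactor_cpumask" \
                 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
    }

    # Usage: thd0_ids=($(reactor_thread_ids 0x1))   # -> "1", the app_thread
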
00:23:58.082 06:16:28 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:23:58.082 06:16:28 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:23:58.082 06:16:28 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 128088 0 00:23:58.082 06:16:28 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 128088 0 idle 00:23:58.082 06:16:28 -- interrupt/interrupt_common.sh@33 -- # local pid=128088 00:23:58.082 06:16:28 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:23:58.082 06:16:28 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:58.082 06:16:28 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:58.082 06:16:28 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:58.082 06:16:28 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:58.082 06:16:28 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:58.082 06:16:28 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:58.082 06:16:28 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 128088 -w 256 00:23:58.082 06:16:28 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 128088 root 20 0 20.1t 145892 29088 S 6.7 1.2 0:00.91 reactor_0' 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@48 -- # echo 128088 root 20 0 20.1t 145892 29088 S 6.7 1.2 0:00.91 reactor_0 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=6.7 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=6 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@53 -- # [[ 6 -gt 30 ]] 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:58.341 06:16:28 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:23:58.341 06:16:28 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 128088 1 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 128088 1 idle 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@33 -- # local pid=128088 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 128088 -w 256 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 128091 root 20 0 20.1t 145892 29088 S 0.0 1.2 0:00.00 reactor_1' 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@48 -- # echo 128091 root 20 0 20.1t 145892 29088 S 0.0 1.2 0:00.00 reactor_1 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:58.341 06:16:28 -- 
interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:58.341 06:16:28 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:23:58.341 06:16:28 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 128088 2 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 128088 2 idle 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@33 -- # local pid=128088 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 128088 -w 256 00:23:58.341 06:16:28 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:23:58.601 06:16:29 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 128092 root 20 0 20.1t 145892 29088 S 0.0 1.2 0:00.00 reactor_2' 00:23:58.601 06:16:29 -- interrupt/interrupt_common.sh@48 -- # echo 128092 root 20 0 20.1t 145892 29088 S 0.0 1.2 0:00.00 reactor_2 00:23:58.601 06:16:29 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:58.601 06:16:29 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:58.601 06:16:29 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:23:58.601 06:16:29 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:23:58.601 06:16:29 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:58.601 06:16:29 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:58.601 06:16:29 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:23:58.601 06:16:29 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:58.601 06:16:29 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:23:58.601 06:16:29 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:23:58.860 [2024-06-11 06:16:29.390854] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:23:58.860 [2024-06-11 06:16:29.391503] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:23:58.860 [2024-06-11 06:16:29.391964] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:23:58.860 06:16:29 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:23:59.119 [2024-06-11 06:16:29.646576] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
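
This is the visible difference between the two passes of the test: in the earlier without_thd run the app thread was first migrated off reactor 0 with thread_set_cpumask (to mask 0x2) so that only the reactor changed modes, whereas in this pass the thread stays put and the RPC carries it along ("Set spdk_thread (app_thread) to poll mode from intr mode." above). For reference, the migration calls as traced in the first pass:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # without_thd variant: move SPDK thread id 1 (app_thread) onto core 1
    # (mask 0x2) before toggling reactor 0, so no thread rides the switch ...
    "$rpc" thread_set_cpumask -i 1 -m 0x2
    # ... and move it back once reactor 0 is in interrupt mode again.
    "$rpc" thread_set_cpumask -i 1 -m 0x1
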
00:23:59.119 [2024-06-11 06:16:29.647308] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:23:59.119 06:16:29 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:23:59.119 06:16:29 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 128088 0 00:23:59.119 06:16:29 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 128088 0 busy 00:23:59.119 06:16:29 -- interrupt/interrupt_common.sh@33 -- # local pid=128088 00:23:59.119 06:16:29 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:23:59.119 06:16:29 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:23:59.119 06:16:29 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:23:59.119 06:16:29 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:59.119 06:16:29 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:59.119 06:16:29 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:59.119 06:16:29 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 128088 -w 256 00:23:59.119 06:16:29 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:23:59.377 06:16:29 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 128088 root 20 0 20.1t 145972 29088 R 99.9 1.2 0:01.34 reactor_0' 00:23:59.377 06:16:29 -- interrupt/interrupt_common.sh@48 -- # echo 128088 root 20 0 20.1t 145972 29088 R 99.9 1.2 0:01.34 reactor_0 00:23:59.377 06:16:29 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:59.377 06:16:29 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:59.377 06:16:29 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:23:59.377 06:16:29 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:23:59.377 06:16:29 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:23:59.377 06:16:29 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:23:59.377 06:16:29 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:23:59.377 06:16:29 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:59.377 06:16:29 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:23:59.377 06:16:29 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 128088 2 00:23:59.377 06:16:29 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 128088 2 busy 00:23:59.377 06:16:29 -- interrupt/interrupt_common.sh@33 -- # local pid=128088 00:23:59.377 06:16:29 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:23:59.377 06:16:29 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:23:59.377 06:16:29 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:23:59.377 06:16:29 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:59.377 06:16:29 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:59.377 06:16:29 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:59.377 06:16:29 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 128088 -w 256 00:23:59.377 06:16:29 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:23:59.377 06:16:30 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 128092 root 20 0 20.1t 145972 29088 R 99.9 1.2 0:00.35 reactor_2' 00:23:59.377 06:16:30 -- interrupt/interrupt_common.sh@48 -- # echo 128092 root 20 0 20.1t 145972 29088 R 99.9 1.2 0:00.35 reactor_2 00:23:59.377 06:16:30 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:59.377 06:16:30 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:59.377 06:16:30 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:23:59.377 
06:16:30 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:23:59.377 06:16:30 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:23:59.377 06:16:30 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:23:59.377 06:16:30 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:23:59.377 06:16:30 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:59.377 06:16:30 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:23:59.637 [2024-06-11 06:16:30.182724] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:23:59.637 [2024-06-11 06:16:30.182916] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:23:59.637 06:16:30 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:23:59.637 06:16:30 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 128088 2 00:23:59.637 06:16:30 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 128088 2 idle 00:23:59.637 06:16:30 -- interrupt/interrupt_common.sh@33 -- # local pid=128088 00:23:59.637 06:16:30 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:23:59.637 06:16:30 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:23:59.637 06:16:30 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:23:59.637 06:16:30 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:23:59.637 06:16:30 -- interrupt/interrupt_common.sh@41 -- # hash top 00:23:59.637 06:16:30 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:23:59.637 06:16:30 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:23:59.637 06:16:30 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 128088 -w 256 00:23:59.637 06:16:30 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:23:59.896 06:16:30 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 128092 root 20 0 20.1t 146040 29088 S 0.0 1.2 0:00.53 reactor_2' 00:23:59.896 06:16:30 -- interrupt/interrupt_common.sh@48 -- # echo 128092 root 20 0 20.1t 146040 29088 S 0.0 1.2 0:00.53 reactor_2 00:23:59.896 06:16:30 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:23:59.896 06:16:30 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:23:59.896 06:16:30 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:23:59.896 06:16:30 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:23:59.896 06:16:30 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:23:59.896 06:16:30 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:23:59.896 06:16:30 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:23:59.896 06:16:30 -- interrupt/interrupt_common.sh@56 -- # return 0 00:23:59.896 06:16:30 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:23:59.896 [2024-06-11 06:16:30.534771] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:23:59.896 [2024-06-11 06:16:30.535144] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
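
Throughout both passes the harness guards itself with a trap installed right after launch (trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT) and only disarms it (trap - SIGINT SIGTERM EXIT) once the assertions have passed, so an interrupted or failing run still tears down the target and removes the dd-created AIO backing file. A condensed sketch of that shape, with hypothetical function names around the traced commands:

    cleanup() { rm -f "$testdir/aiofile"; }   # delete the AIO backing file

    start_guarded_target() {   # hypothetical name
        /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt \
            -m 0x07 -r /var/tmp/spdk.sock -E -g &
        intr_tgt_pid=$!
        # Any signal or early exit still kills the target and cleans up.
        trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT
    }

    finish() {   # hypothetical name
        trap - SIGINT SIGTERM EXIT    # success path: disarm the guard ...
        killprocess "$intr_tgt_pid"   # ... then tear down deliberately
        cleanup
    }
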
00:23:59.896 [2024-06-11 06:16:30.535185] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:00.155 06:16:30 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:24:00.155 06:16:30 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 128088 0 00:24:00.155 06:16:30 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 128088 0 idle 00:24:00.155 06:16:30 -- interrupt/interrupt_common.sh@33 -- # local pid=128088 00:24:00.155 06:16:30 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:24:00.155 06:16:30 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:00.155 06:16:30 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:00.155 06:16:30 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:00.155 06:16:30 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:00.155 06:16:30 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:00.155 06:16:30 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:00.155 06:16:30 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 128088 -w 256 00:24:00.155 06:16:30 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:24:00.155 06:16:30 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 128088 root 20 0 20.1t 146084 29088 S 0.0 1.2 0:02.06 reactor_0' 00:24:00.155 06:16:30 -- interrupt/interrupt_common.sh@48 -- # echo 128088 root 20 0 20.1t 146084 29088 S 0.0 1.2 0:02.06 reactor_0 00:24:00.155 06:16:30 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:00.155 06:16:30 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:00.155 06:16:30 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:00.155 06:16:30 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:24:00.155 06:16:30 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:00.155 06:16:30 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:00.155 06:16:30 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:00.155 06:16:30 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:00.155 06:16:30 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:24:00.155 06:16:30 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:24:00.155 06:16:30 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:00.155 06:16:30 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 128088 00:24:00.155 06:16:30 -- common/autotest_common.sh@926 -- # '[' -z 128088 ']' 00:24:00.155 06:16:30 -- common/autotest_common.sh@930 -- # kill -0 128088 00:24:00.155 06:16:30 -- common/autotest_common.sh@931 -- # uname 00:24:00.155 06:16:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:00.155 06:16:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 128088 00:24:00.155 06:16:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:00.155 06:16:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:00.155 06:16:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 128088' 00:24:00.155 killing process with pid 128088 00:24:00.155 06:16:30 -- common/autotest_common.sh@945 -- # kill 128088 00:24:00.155 06:16:30 -- common/autotest_common.sh@950 -- # wait 128088 00:24:02.064 06:16:32 -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:24:02.064 06:16:32 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:24:02.064 ************************************ 
00:24:02.064 END TEST reactor_set_interrupt 00:24:02.064 ************************************ 00:24:02.064 00:24:02.064 real 0m12.608s 00:24:02.064 user 0m12.575s 00:24:02.064 sys 0m2.285s 00:24:02.064 06:16:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:02.064 06:16:32 -- common/autotest_common.sh@10 -- # set +x 00:24:02.064 06:16:32 -- spdk/autotest.sh@200 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:24:02.064 06:16:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:02.064 06:16:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:02.064 06:16:32 -- common/autotest_common.sh@10 -- # set +x 00:24:02.064 ************************************ 00:24:02.064 START TEST reap_unregistered_poller 00:24:02.064 ************************************ 00:24:02.064 06:16:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:24:02.064 * Looking for test storage... 00:24:02.064 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:02.064 06:16:32 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:24:02.064 06:16:32 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:24:02.064 06:16:32 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:02.064 06:16:32 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:02.064 06:16:32 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:24:02.064 06:16:32 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:02.064 06:16:32 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:24:02.064 06:16:32 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:24:02.064 06:16:32 -- common/autotest_common.sh@34 -- # set -e 00:24:02.064 06:16:32 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:24:02.064 06:16:32 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:24:02.064 06:16:32 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:24:02.064 06:16:32 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:24:02.064 06:16:32 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:24:02.064 06:16:32 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:24:02.064 06:16:32 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:24:02.064 06:16:32 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:24:02.064 06:16:32 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:24:02.064 06:16:32 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:24:02.064 06:16:32 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:24:02.064 06:16:32 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:24:02.064 06:16:32 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:24:02.064 06:16:32 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:24:02.064 06:16:32 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:24:02.064 06:16:32 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:24:02.064 06:16:32 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:24:02.064 06:16:32 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:24:02.064 
06:16:32 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:24:02.064 06:16:32 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:24:02.064 06:16:32 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:24:02.064 06:16:32 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:24:02.064 06:16:32 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:24:02.064 06:16:32 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:24:02.064 06:16:32 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:24:02.064 06:16:32 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:24:02.064 06:16:32 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:24:02.064 06:16:32 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:24:02.064 06:16:32 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:24:02.064 06:16:32 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:24:02.064 06:16:32 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:24:02.064 06:16:32 -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:24:02.064 06:16:32 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:24:02.064 06:16:32 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:24:02.064 06:16:32 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:24:02.064 06:16:32 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:24:02.064 06:16:32 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:24:02.064 06:16:32 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:24:02.064 06:16:32 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:24:02.064 06:16:32 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:24:02.064 06:16:32 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:24:02.064 06:16:32 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:24:02.064 06:16:32 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:24:02.064 06:16:32 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:24:02.064 06:16:32 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:24:02.064 06:16:32 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:24:02.064 06:16:32 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:24:02.064 06:16:32 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:24:02.064 06:16:32 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:24:02.064 06:16:32 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:24:02.064 06:16:32 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:24:02.064 06:16:32 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:24:02.064 06:16:32 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:24:02.064 06:16:32 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:24:02.064 06:16:32 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:24:02.064 06:16:32 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:24:02.064 06:16:32 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:24:02.064 06:16:32 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:24:02.064 06:16:32 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:24:02.064 06:16:32 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:24:02.064 06:16:32 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:24:02.064 06:16:32 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:24:02.064 06:16:32 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:24:02.064 06:16:32 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:24:02.064 06:16:32 
-- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:24:02.064 06:16:32 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:24:02.064 06:16:32 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:24:02.064 06:16:32 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:24:02.064 06:16:32 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:24:02.064 06:16:32 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:24:02.064 06:16:32 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:24:02.064 06:16:32 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:24:02.064 06:16:32 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:24:02.064 06:16:32 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:24:02.064 06:16:32 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:24:02.064 06:16:32 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:24:02.064 06:16:32 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:24:02.064 06:16:32 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:24:02.064 06:16:32 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:24:02.064 06:16:32 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:24:02.064 06:16:32 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:24:02.064 06:16:32 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:24:02.064 06:16:32 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:24:02.064 06:16:32 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:24:02.065 06:16:32 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:24:02.065 06:16:32 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:24:02.065 06:16:32 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:24:02.065 06:16:32 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:24:02.065 06:16:32 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:24:02.065 06:16:32 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:24:02.065 06:16:32 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:24:02.065 06:16:32 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:24:02.065 06:16:32 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:24:02.065 06:16:32 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:24:02.065 06:16:32 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:24:02.065 06:16:32 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:24:02.065 06:16:32 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:24:02.065 06:16:32 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:24:02.065 06:16:32 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:24:02.065 #define SPDK_CONFIG_H 00:24:02.065 #define SPDK_CONFIG_APPS 1 00:24:02.065 #define SPDK_CONFIG_ARCH native 00:24:02.065 #define SPDK_CONFIG_ASAN 1 00:24:02.065 #undef SPDK_CONFIG_AVAHI 00:24:02.065 #undef SPDK_CONFIG_CET 00:24:02.065 #define SPDK_CONFIG_COVERAGE 1 00:24:02.065 #define SPDK_CONFIG_CROSS_PREFIX 00:24:02.065 #undef SPDK_CONFIG_CRYPTO 00:24:02.065 #undef SPDK_CONFIG_CRYPTO_MLX5 00:24:02.065 #undef SPDK_CONFIG_CUSTOMOCF 00:24:02.065 #undef SPDK_CONFIG_DAOS 00:24:02.065 #define SPDK_CONFIG_DAOS_DIR 00:24:02.065 
#define SPDK_CONFIG_DEBUG 1 00:24:02.065 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:24:02.065 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:24:02.065 #define SPDK_CONFIG_DPDK_INC_DIR 00:24:02.065 #define SPDK_CONFIG_DPDK_LIB_DIR 00:24:02.065 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:24:02.065 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:24:02.065 #define SPDK_CONFIG_EXAMPLES 1 00:24:02.065 #undef SPDK_CONFIG_FC 00:24:02.065 #define SPDK_CONFIG_FC_PATH 00:24:02.065 #define SPDK_CONFIG_FIO_PLUGIN 1 00:24:02.065 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:24:02.065 #undef SPDK_CONFIG_FUSE 00:24:02.065 #undef SPDK_CONFIG_FUZZER 00:24:02.065 #define SPDK_CONFIG_FUZZER_LIB 00:24:02.065 #undef SPDK_CONFIG_GOLANG 00:24:02.065 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:24:02.065 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:24:02.065 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:24:02.065 #undef SPDK_CONFIG_HAVE_LIBBSD 00:24:02.065 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:24:02.065 #define SPDK_CONFIG_IDXD 1 00:24:02.065 #undef SPDK_CONFIG_IDXD_KERNEL 00:24:02.065 #undef SPDK_CONFIG_IPSEC_MB 00:24:02.065 #define SPDK_CONFIG_IPSEC_MB_DIR 00:24:02.065 #define SPDK_CONFIG_ISAL 1 00:24:02.065 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:24:02.065 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:24:02.065 #define SPDK_CONFIG_LIBDIR 00:24:02.065 #undef SPDK_CONFIG_LTO 00:24:02.065 #define SPDK_CONFIG_MAX_LCORES 00:24:02.065 #define SPDK_CONFIG_NVME_CUSE 1 00:24:02.065 #undef SPDK_CONFIG_OCF 00:24:02.065 #define SPDK_CONFIG_OCF_PATH 00:24:02.065 #define SPDK_CONFIG_OPENSSL_PATH 00:24:02.065 #undef SPDK_CONFIG_PGO_CAPTURE 00:24:02.065 #undef SPDK_CONFIG_PGO_USE 00:24:02.065 #define SPDK_CONFIG_PREFIX /usr/local 00:24:02.065 #undef SPDK_CONFIG_RAID5F 00:24:02.065 #undef SPDK_CONFIG_RBD 00:24:02.065 #define SPDK_CONFIG_RDMA 1 00:24:02.065 #define SPDK_CONFIG_RDMA_PROV verbs 00:24:02.065 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:24:02.065 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:24:02.065 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:24:02.065 #undef SPDK_CONFIG_SHARED 00:24:02.065 #undef SPDK_CONFIG_SMA 00:24:02.065 #define SPDK_CONFIG_TESTS 1 00:24:02.065 #undef SPDK_CONFIG_TSAN 00:24:02.065 #undef SPDK_CONFIG_UBLK 00:24:02.065 #define SPDK_CONFIG_UBSAN 1 00:24:02.065 #define SPDK_CONFIG_UNIT_TESTS 1 00:24:02.065 #undef SPDK_CONFIG_URING 00:24:02.065 #define SPDK_CONFIG_URING_PATH 00:24:02.065 #undef SPDK_CONFIG_URING_ZNS 00:24:02.065 #undef SPDK_CONFIG_USDT 00:24:02.065 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:24:02.065 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:24:02.065 #undef SPDK_CONFIG_VFIO_USER 00:24:02.065 #define SPDK_CONFIG_VFIO_USER_DIR 00:24:02.065 #define SPDK_CONFIG_VHOST 1 00:24:02.065 #define SPDK_CONFIG_VIRTIO 1 00:24:02.065 #undef SPDK_CONFIG_VTUNE 00:24:02.065 #define SPDK_CONFIG_VTUNE_DIR 00:24:02.065 #define SPDK_CONFIG_WERROR 1 00:24:02.065 #define SPDK_CONFIG_WPDK_DIR 00:24:02.065 #undef SPDK_CONFIG_XNVME 00:24:02.065 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:24:02.065 06:16:32 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:24:02.065 06:16:32 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:02.065 06:16:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.065 06:16:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.065 06:16:32 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.065 06:16:32 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:02.065 06:16:32 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:02.065 06:16:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:02.065 06:16:32 -- paths/export.sh@5 -- # export PATH 00:24:02.065 06:16:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:02.065 06:16:32 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:24:02.065 06:16:32 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:24:02.065 06:16:32 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:24:02.065 06:16:32 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:24:02.065 06:16:32 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:24:02.065 06:16:32 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:24:02.065 06:16:32 -- pm/common@16 -- # TEST_TAG=N/A 00:24:02.065 06:16:32 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:24:02.065 06:16:32 -- common/autotest_common.sh@52 -- # : 1 00:24:02.065 06:16:32 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:24:02.065 06:16:32 -- common/autotest_common.sh@56 -- # : 0 00:24:02.065 06:16:32 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:24:02.065 06:16:32 -- common/autotest_common.sh@58 -- # : 0 00:24:02.065 06:16:32 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:24:02.065 06:16:32 -- common/autotest_common.sh@60 -- # : 1 00:24:02.065 06:16:32 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:24:02.065 06:16:32 -- common/autotest_common.sh@62 -- # : 1 00:24:02.065 06:16:32 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:24:02.065 06:16:32 -- common/autotest_common.sh@64 -- # : 00:24:02.065 06:16:32 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:24:02.065 06:16:32 -- common/autotest_common.sh@66 -- # : 0 00:24:02.065 06:16:32 -- 
common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:24:02.065 06:16:32 -- common/autotest_common.sh@68 -- # : 0 00:24:02.065 06:16:32 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:24:02.065 06:16:32 -- common/autotest_common.sh@70 -- # : 0 00:24:02.065 06:16:32 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:24:02.065 06:16:32 -- common/autotest_common.sh@72 -- # : 0 00:24:02.065 06:16:32 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:24:02.065 06:16:32 -- common/autotest_common.sh@74 -- # : 1 00:24:02.065 06:16:32 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:24:02.065 06:16:32 -- common/autotest_common.sh@76 -- # : 0 00:24:02.065 06:16:32 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:24:02.065 06:16:32 -- common/autotest_common.sh@78 -- # : 0 00:24:02.065 06:16:32 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:24:02.065 06:16:32 -- common/autotest_common.sh@80 -- # : 0 00:24:02.065 06:16:32 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:24:02.065 06:16:32 -- common/autotest_common.sh@82 -- # : 0 00:24:02.065 06:16:32 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:24:02.065 06:16:32 -- common/autotest_common.sh@84 -- # : 0 00:24:02.065 06:16:32 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:24:02.065 06:16:32 -- common/autotest_common.sh@86 -- # : 0 00:24:02.065 06:16:32 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:24:02.065 06:16:32 -- common/autotest_common.sh@88 -- # : 0 00:24:02.065 06:16:32 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:24:02.065 06:16:32 -- common/autotest_common.sh@90 -- # : 0 00:24:02.065 06:16:32 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:24:02.065 06:16:32 -- common/autotest_common.sh@92 -- # : 0 00:24:02.065 06:16:32 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:24:02.065 06:16:32 -- common/autotest_common.sh@94 -- # : 0 00:24:02.065 06:16:32 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:24:02.065 06:16:32 -- common/autotest_common.sh@96 -- # : rdma 00:24:02.065 06:16:32 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:24:02.065 06:16:32 -- common/autotest_common.sh@98 -- # : 0 00:24:02.065 06:16:32 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:24:02.065 06:16:32 -- common/autotest_common.sh@100 -- # : 0 00:24:02.065 06:16:32 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:24:02.066 06:16:32 -- common/autotest_common.sh@102 -- # : 1 00:24:02.066 06:16:32 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:24:02.066 06:16:32 -- common/autotest_common.sh@104 -- # : 0 00:24:02.066 06:16:32 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:24:02.066 06:16:32 -- common/autotest_common.sh@106 -- # : 0 00:24:02.066 06:16:32 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:24:02.066 06:16:32 -- common/autotest_common.sh@108 -- # : 0 00:24:02.066 06:16:32 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:24:02.066 06:16:32 -- common/autotest_common.sh@110 -- # : 0 00:24:02.066 06:16:32 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:24:02.066 06:16:32 -- common/autotest_common.sh@112 -- # : 0 00:24:02.066 06:16:32 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:24:02.066 06:16:32 -- common/autotest_common.sh@114 -- # : 1 
00:24:02.066 06:16:32 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:24:02.066 06:16:32 -- common/autotest_common.sh@116 -- # : 1 00:24:02.066 06:16:32 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:24:02.066 06:16:32 -- common/autotest_common.sh@118 -- # : 00:24:02.066 06:16:32 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:24:02.066 06:16:32 -- common/autotest_common.sh@120 -- # : 0 00:24:02.066 06:16:32 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:24:02.066 06:16:32 -- common/autotest_common.sh@122 -- # : 0 00:24:02.066 06:16:32 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:24:02.066 06:16:32 -- common/autotest_common.sh@124 -- # : 0 00:24:02.066 06:16:32 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:24:02.066 06:16:32 -- common/autotest_common.sh@126 -- # : 0 00:24:02.066 06:16:32 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:24:02.066 06:16:32 -- common/autotest_common.sh@128 -- # : 0 00:24:02.066 06:16:32 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:24:02.066 06:16:32 -- common/autotest_common.sh@130 -- # : 0 00:24:02.066 06:16:32 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:24:02.066 06:16:32 -- common/autotest_common.sh@132 -- # : 00:24:02.066 06:16:32 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:24:02.066 06:16:32 -- common/autotest_common.sh@134 -- # : true 00:24:02.066 06:16:32 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:24:02.066 06:16:32 -- common/autotest_common.sh@136 -- # : 0 00:24:02.066 06:16:32 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:24:02.066 06:16:32 -- common/autotest_common.sh@138 -- # : 0 00:24:02.066 06:16:32 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:24:02.066 06:16:32 -- common/autotest_common.sh@140 -- # : 0 00:24:02.066 06:16:32 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:24:02.066 06:16:32 -- common/autotest_common.sh@142 -- # : 0 00:24:02.066 06:16:32 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:24:02.066 06:16:32 -- common/autotest_common.sh@144 -- # : 0 00:24:02.066 06:16:32 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:24:02.066 06:16:32 -- common/autotest_common.sh@146 -- # : 0 00:24:02.066 06:16:32 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:24:02.066 06:16:32 -- common/autotest_common.sh@148 -- # : 00:24:02.066 06:16:32 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:24:02.066 06:16:32 -- common/autotest_common.sh@150 -- # : 0 00:24:02.066 06:16:32 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:24:02.066 06:16:32 -- common/autotest_common.sh@152 -- # : 0 00:24:02.066 06:16:32 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:24:02.066 06:16:32 -- common/autotest_common.sh@154 -- # : 0 00:24:02.066 06:16:32 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:24:02.066 06:16:32 -- common/autotest_common.sh@156 -- # : 0 00:24:02.066 06:16:32 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:24:02.066 06:16:32 -- common/autotest_common.sh@158 -- # : 0 00:24:02.066 06:16:32 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:24:02.066 06:16:32 -- common/autotest_common.sh@160 -- # : 0 00:24:02.066 06:16:32 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:24:02.066 06:16:32 -- common/autotest_common.sh@163 -- # 
: 00:24:02.066 06:16:32 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:24:02.066 06:16:32 -- common/autotest_common.sh@165 -- # : 0 00:24:02.066 06:16:32 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:24:02.066 06:16:32 -- common/autotest_common.sh@167 -- # : 0 00:24:02.066 06:16:32 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:24:02.066 06:16:32 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:24:02.066 06:16:32 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:24:02.066 06:16:32 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:24:02.066 06:16:32 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:24:02.066 06:16:32 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:02.066 06:16:32 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:02.066 06:16:32 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:02.066 06:16:32 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:02.066 06:16:32 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:24:02.066 06:16:32 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:24:02.066 06:16:32 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:24:02.066 06:16:32 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:24:02.066 06:16:32 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:24:02.066 06:16:32 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:24:02.066 06:16:32 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:02.066 06:16:32 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:02.066 06:16:32 -- common/autotest_common.sh@190 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:02.066 06:16:32 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:02.066 06:16:32 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:24:02.066 06:16:32 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:24:02.066 06:16:32 -- common/autotest_common.sh@196 -- # cat 00:24:02.066 06:16:32 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:24:02.066 06:16:32 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:02.066 06:16:32 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:02.066 06:16:32 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:02.066 06:16:32 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:02.066 06:16:32 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:24:02.066 06:16:32 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:24:02.066 06:16:32 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:24:02.066 06:16:32 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:24:02.066 06:16:32 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:24:02.066 06:16:32 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:24:02.066 06:16:32 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:24:02.066 06:16:32 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:24:02.066 06:16:32 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:24:02.066 06:16:32 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:24:02.066 06:16:32 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:24:02.066 06:16:32 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:24:02.066 06:16:32 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:02.066 06:16:32 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:02.066 06:16:32 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:24:02.066 06:16:32 -- common/autotest_common.sh@249 -- # export valgrind= 00:24:02.066 06:16:32 -- common/autotest_common.sh@249 -- # valgrind= 00:24:02.066 06:16:32 -- common/autotest_common.sh@255 -- # uname -s 00:24:02.066 06:16:32 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:24:02.066 06:16:32 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:24:02.066 06:16:32 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:24:02.066 06:16:32 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:24:02.066 06:16:32 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:24:02.066 06:16:32 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:24:02.066 06:16:32 -- common/autotest_common.sh@265 -- # MAKE=make 00:24:02.066 06:16:32 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:24:02.066 06:16:32 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:24:02.066 06:16:32 -- 
common/autotest_common.sh@282 -- # HUGEMEM=4096 00:24:02.066 06:16:32 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:24:02.066 06:16:32 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:24:02.066 06:16:32 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:24:02.066 06:16:32 -- common/autotest_common.sh@309 -- # [[ -z 128271 ]] 00:24:02.066 06:16:32 -- common/autotest_common.sh@309 -- # kill -0 128271 00:24:02.066 06:16:32 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:24:02.066 06:16:32 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:24:02.066 06:16:32 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:24:02.067 06:16:32 -- common/autotest_common.sh@322 -- # local mount target_dir 00:24:02.067 06:16:32 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:24:02.067 06:16:32 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:24:02.067 06:16:32 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:24:02.067 06:16:32 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:24:02.067 06:16:32 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.LuSW8M 00:24:02.067 06:16:32 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:24:02.067 06:16:32 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:24:02.067 06:16:32 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:24:02.067 06:16:32 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.LuSW8M/tests/interrupt /tmp/spdk.LuSW8M 00:24:02.067 06:16:32 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:24:02.067 06:16:32 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:02.067 06:16:32 -- common/autotest_common.sh@318 -- # df -T 00:24:02.067 06:16:32 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:24:02.067 06:16:32 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:24:02.067 06:16:32 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:24:02.067 06:16:32 -- common/autotest_common.sh@353 -- # avails["$mount"]=1248956416 00:24:02.067 06:16:32 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253683200 00:24:02.067 06:16:32 -- common/autotest_common.sh@354 -- # uses["$mount"]=4726784 00:24:02.067 06:16:32 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:02.067 06:16:32 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:24:02.067 06:16:32 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:24:02.067 06:16:32 -- common/autotest_common.sh@353 -- # avails["$mount"]=10273636352 00:24:02.067 06:16:32 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20616794112 00:24:02.067 06:16:32 -- common/autotest_common.sh@354 -- # uses["$mount"]=10326380544 00:24:02.067 06:16:32 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:02.067 06:16:32 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:24:02.067 06:16:32 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:24:02.067 06:16:32 -- common/autotest_common.sh@353 -- # avails["$mount"]=6265810944 00:24:02.067 06:16:32 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6268403712 00:24:02.067 06:16:32 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:24:02.067 06:16:32 
-- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:02.067 06:16:32 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:24:02.067 06:16:32 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:24:02.067 06:16:32 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:24:02.067 06:16:32 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:24:02.067 06:16:32 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:24:02.067 06:16:32 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:02.067 06:16:32 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:24:02.067 06:16:32 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:24:02.067 06:16:32 -- common/autotest_common.sh@353 -- # avails["$mount"]=103061504 00:24:02.067 06:16:32 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109395968 00:24:02.067 06:16:32 -- common/autotest_common.sh@354 -- # uses["$mount"]=6334464 00:24:02.067 06:16:32 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:02.067 06:16:32 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:24:02.067 06:16:32 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:24:02.067 06:16:32 -- common/autotest_common.sh@353 -- # avails["$mount"]=1253675008 00:24:02.067 06:16:32 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253679104 00:24:02.067 06:16:32 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:24:02.067 06:16:32 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:02.067 06:16:32 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:24:02.067 06:16:32 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:24:02.067 06:16:32 -- common/autotest_common.sh@353 -- # avails["$mount"]=97224732672 00:24:02.067 06:16:32 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:24:02.067 06:16:32 -- common/autotest_common.sh@354 -- # uses["$mount"]=2478047232 00:24:02.067 06:16:32 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:24:02.067 06:16:32 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:24:02.067 * Looking for test storage... 
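
The mount scan above feeds a best-fit decision: the trace lines that follow pick the filesystem backing the test directory, check that it can absorb the requested 2214592512 bytes, and only fall back to the mktemp directory when it cannot. A minimal sketch of that selection, assuming the illustrative name pick_test_storage (the real set_test_storage in autotest_common.sh does more bookkeeping):

# Minimal sketch of the storage selection traced below; pick_test_storage
# is an illustrative name, not a helper from autotest_common.sh.
pick_test_storage() {
    local requested_size=$1 target_dir=$2
    # df prints the mount point that actually backs $target_dir
    local mount
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    # df reports 1K blocks, so scale the free space to bytes
    local target_space=$(( $(df --output=avail "$target_dir" | tail -n1) * 1024 ))
    if (( target_space >= requested_size )); then
        echo "* Found test storage at $target_dir (mount $mount)"
        export SPDK_TEST_STORAGE=$target_dir
        return 0
    fi
    return 1   # caller falls back to the mktemp directory
}
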
00:24:02.067 06:16:32 -- common/autotest_common.sh@359 -- # local target_space new_size 00:24:02.067 06:16:32 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:24:02.067 06:16:32 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:02.067 06:16:32 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:24:02.067 06:16:32 -- common/autotest_common.sh@363 -- # mount=/ 00:24:02.067 06:16:32 -- common/autotest_common.sh@365 -- # target_space=10273636352 00:24:02.067 06:16:32 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:24:02.067 06:16:32 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:24:02.067 06:16:32 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:24:02.067 06:16:32 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:24:02.067 06:16:32 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:24:02.067 06:16:32 -- common/autotest_common.sh@372 -- # new_size=12540973056 00:24:02.067 06:16:32 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:24:02.067 06:16:32 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:02.067 06:16:32 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:02.067 06:16:32 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:02.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:02.067 06:16:32 -- common/autotest_common.sh@380 -- # return 0 00:24:02.067 06:16:32 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:24:02.067 06:16:32 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:24:02.067 06:16:32 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:24:02.067 06:16:32 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:24:02.067 06:16:32 -- common/autotest_common.sh@1672 -- # true 00:24:02.067 06:16:32 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:24:02.067 06:16:32 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:24:02.067 06:16:32 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:24:02.067 06:16:32 -- common/autotest_common.sh@27 -- # exec 00:24:02.067 06:16:32 -- common/autotest_common.sh@29 -- # exec 00:24:02.067 06:16:32 -- common/autotest_common.sh@31 -- # xtrace_restore 00:24:02.067 06:16:32 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:24:02.067 06:16:32 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:24:02.067 06:16:32 -- common/autotest_common.sh@18 -- # set -x 00:24:02.067 06:16:32 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:02.067 06:16:32 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:24:02.067 06:16:32 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:24:02.067 06:16:32 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:24:02.067 06:16:32 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:24:02.067 06:16:32 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:24:02.067 06:16:32 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:24:02.067 06:16:32 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:24:02.067 06:16:32 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:24:02.067 06:16:32 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.067 06:16:32 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:24:02.067 06:16:32 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=128312 00:24:02.067 06:16:32 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:02.067 06:16:32 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 128312 /var/tmp/spdk.sock 00:24:02.067 06:16:32 -- common/autotest_common.sh@819 -- # '[' -z 128312 ']' 00:24:02.067 06:16:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.067 06:16:32 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:24:02.067 06:16:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:02.067 06:16:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:02.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.067 06:16:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:02.067 06:16:32 -- common/autotest_common.sh@10 -- # set +x 00:24:02.327 [2024-06-11 06:16:32.772116] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
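
waitforlisten gates everything that follows: the interrupt_tgt launched above must be answering RPCs on /var/tmp/spdk.sock before the test may query its pollers. A simplified stand-in for that wait loop (the real helper in autotest_common.sh also distinguishes abnormal exits; the 100 retries mirror the max_retries=100 seen in the trace):

# Simplified stand-in for waitforlisten: poll the RPC socket until the
# target answers or dies. rpc_get_methods is a cheap built-in RPC.
wait_for_rpc() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target exited early
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" \
            rpc_get_methods &>/dev/null && return 0
        sleep 0.5
    done
    return 1
}
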
00:24:02.327 [2024-06-11 06:16:32.773121] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128312 ] 00:24:02.586 [2024-06-11 06:16:32.979123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:02.586 [2024-06-11 06:16:33.220234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.586 [2024-06-11 06:16:33.220431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.586 [2024-06-11 06:16:33.220432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:03.190 [2024-06-11 06:16:33.614479] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:03.190 06:16:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:03.190 06:16:33 -- common/autotest_common.sh@852 -- # return 0 00:24:03.190 06:16:33 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:24:03.190 06:16:33 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:24:03.190 06:16:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:03.190 06:16:33 -- common/autotest_common.sh@10 -- # set +x 00:24:03.190 06:16:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:03.190 06:16:33 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:24:03.190 "name": "app_thread", 00:24:03.190 "id": 1, 00:24:03.190 "active_pollers": [], 00:24:03.190 "timed_pollers": [ 00:24:03.190 { 00:24:03.190 "name": "rpc_subsystem_poll", 00:24:03.190 "id": 1, 00:24:03.190 "state": "waiting", 00:24:03.190 "run_count": 0, 00:24:03.190 "busy_count": 0, 00:24:03.190 "period_ticks": 8400000 00:24:03.190 } 00:24:03.190 ], 00:24:03.190 "paused_pollers": [] 00:24:03.190 }' 00:24:03.190 06:16:33 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:24:03.190 06:16:33 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:24:03.190 06:16:33 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:24:03.190 06:16:33 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:24:03.449 06:16:33 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll 00:24:03.449 06:16:33 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:24:03.449 06:16:33 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:24:03.449 06:16:33 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:24:03.449 06:16:33 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:24:03.449 5000+0 records in 00:24:03.449 5000+0 records out 00:24:03.449 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0269127 s, 380 MB/s 00:24:03.449 06:16:33 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:24:03.708 AIO0 00:24:03.708 06:16:34 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:03.708 06:16:34 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:24:03.967 06:16:34 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:24:03.967 06:16:34 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:24:03.967 06:16:34 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:24:03.967 06:16:34 -- common/autotest_common.sh@10 -- # set +x 00:24:03.967 06:16:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:03.967 06:16:34 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:24:03.967 "name": "app_thread", 00:24:03.967 "id": 1, 00:24:03.967 "active_pollers": [], 00:24:03.967 "timed_pollers": [ 00:24:03.967 { 00:24:03.967 "name": "rpc_subsystem_poll", 00:24:03.967 "id": 1, 00:24:03.967 "state": "waiting", 00:24:03.967 "run_count": 0, 00:24:03.967 "busy_count": 0, 00:24:03.967 "period_ticks": 8400000 00:24:03.967 } 00:24:03.967 ], 00:24:03.967 "paused_pollers": [] 00:24:03.967 }' 00:24:03.967 06:16:34 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:24:03.967 06:16:34 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:24:03.968 06:16:34 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:24:03.968 06:16:34 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:24:03.968 06:16:34 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll 00:24:03.968 06:16:34 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]] 00:24:03.968 06:16:34 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:24:03.968 06:16:34 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 128312 00:24:03.968 06:16:34 -- common/autotest_common.sh@926 -- # '[' -z 128312 ']' 00:24:03.968 06:16:34 -- common/autotest_common.sh@930 -- # kill -0 128312 00:24:03.968 06:16:34 -- common/autotest_common.sh@931 -- # uname 00:24:03.968 06:16:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:03.968 06:16:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 128312 00:24:03.968 06:16:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:03.968 06:16:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:03.968 killing process with pid 128312 00:24:03.968 06:16:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 128312' 00:24:03.968 06:16:34 -- common/autotest_common.sh@945 -- # kill 128312 00:24:03.968 06:16:34 -- common/autotest_common.sh@950 -- # wait 128312 00:24:05.347 06:16:35 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:24:05.347 06:16:35 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:24:05.607 ************************************ 00:24:05.607 END TEST reap_unregistered_poller 00:24:05.607 ************************************ 00:24:05.607 00:24:05.607 real 0m3.527s 00:24:05.607 user 0m3.063s 00:24:05.607 sys 0m0.683s 00:24:05.607 06:16:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:05.607 06:16:35 -- common/autotest_common.sh@10 -- # set +x 00:24:05.607 06:16:36 -- spdk/autotest.sh@204 -- # uname -s 00:24:05.607 06:16:36 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:24:05.607 06:16:36 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]] 00:24:05.607 06:16:36 -- spdk/autotest.sh@211 -- # [[ 0 -eq 0 ]] 00:24:05.607 06:16:36 -- spdk/autotest.sh@212 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:24:05.607 06:16:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:05.607 06:16:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:05.607 06:16:36 -- 
common/autotest_common.sh@10 -- # set +x 00:24:05.607 ************************************ 00:24:05.607 START TEST spdk_dd 00:24:05.607 ************************************ 00:24:05.607 06:16:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:24:05.607 * Looking for test storage... 00:24:05.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:24:05.607 06:16:36 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:05.607 06:16:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:05.607 06:16:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.607 06:16:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.607 06:16:36 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:05.607 06:16:36 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:05.607 06:16:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:05.607 06:16:36 -- paths/export.sh@5 -- # export PATH 00:24:05.607 06:16:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:05.607 06:16:36 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:06.174 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:24:06.174 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:08.082 06:16:38 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:24:08.082 06:16:38 -- dd/dd.sh@11 -- # nvme_in_userspace 00:24:08.082 06:16:38 -- scripts/common.sh@311 -- # local bdf bdfs 00:24:08.082 06:16:38 -- scripts/common.sh@312 -- # local nvmes 00:24:08.082 06:16:38 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:24:08.082 06:16:38 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:24:08.082 06:16:38 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:24:08.082 06:16:38 -- scripts/common.sh@297 -- # local bdf= 00:24:08.082 06:16:38 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:24:08.082 06:16:38 -- scripts/common.sh@232 -- # local class 00:24:08.082 
06:16:38 -- scripts/common.sh@233 -- # local subclass 00:24:08.082 06:16:38 -- scripts/common.sh@234 -- # local progif 00:24:08.082 06:16:38 -- scripts/common.sh@235 -- # printf %02x 1 00:24:08.082 06:16:38 -- scripts/common.sh@235 -- # class=01 00:24:08.082 06:16:38 -- scripts/common.sh@236 -- # printf %02x 8 00:24:08.082 06:16:38 -- scripts/common.sh@236 -- # subclass=08 00:24:08.082 06:16:38 -- scripts/common.sh@237 -- # printf %02x 2 00:24:08.082 06:16:38 -- scripts/common.sh@237 -- # progif=02 00:24:08.082 06:16:38 -- scripts/common.sh@239 -- # hash lspci 00:24:08.082 06:16:38 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:24:08.082 06:16:38 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:24:08.082 06:16:38 -- scripts/common.sh@242 -- # grep -i -- -p02 00:24:08.082 06:16:38 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:24:08.082 06:16:38 -- scripts/common.sh@244 -- # tr -d '"' 00:24:08.082 06:16:38 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:08.082 06:16:38 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:24:08.082 06:16:38 -- scripts/common.sh@15 -- # local i 00:24:08.082 06:16:38 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:24:08.082 06:16:38 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:24:08.082 06:16:38 -- scripts/common.sh@24 -- # return 0 00:24:08.082 06:16:38 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:24:08.082 06:16:38 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:24:08.082 06:16:38 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:24:08.082 06:16:38 -- scripts/common.sh@322 -- # uname -s 00:24:08.082 06:16:38 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:24:08.082 06:16:38 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:24:08.082 06:16:38 -- scripts/common.sh@327 -- # (( 1 )) 00:24:08.082 06:16:38 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 00:24:08.082 06:16:38 -- dd/dd.sh@13 -- # check_liburing 00:24:08.082 06:16:38 -- dd/common.sh@139 -- # local lib so 00:24:08.082 06:16:38 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:24:08.082 06:16:38 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:08.082 06:16:38 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:24:08.082 06:16:38 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:08.082 06:16:38 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:24:08.082 06:16:38 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:08.082 06:16:38 -- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]] 00:24:08.082 06:16:38 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:08.082 06:16:38 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:24:08.082 06:16:38 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:08.082 06:16:38 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:24:08.082 06:16:38 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:08.082 06:16:38 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:24:08.082 06:16:38 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:08.082 06:16:38 -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:24:08.082 06:16:38 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:08.082 06:16:38 -- dd/common.sh@143 -- # [[ libssl.so.3 == liburing.so.* ]] 00:24:08.082 06:16:38 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:08.082 06:16:38 -- dd/common.sh@143 -- # [[ libcrypto.so.3 == liburing.so.* ]] 00:24:08.082 06:16:38 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:24:08.082 06:16:38 -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:24:08.082 06:16:38 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:08.082 06:16:38 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:24:08.082 06:16:38 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:08.082 06:16:38 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:24:08.082 06:16:38 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:08.082 06:16:38 -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:24:08.082 06:16:38 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:08.082 06:16:38 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:24:08.082 06:16:38 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:08.082 06:16:38 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:24:08.082 06:16:38 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:08.082 06:16:38 -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:24:08.082 06:16:38 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:08.082 06:16:38 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:24:08.082 06:16:38 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:08.082 06:16:38 -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:24:08.082 06:16:38 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:08.082 06:16:38 -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:24:08.082 06:16:38 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:08.082 06:16:38 -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:24:08.082 06:16:38 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:08.082 06:16:38 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:24:08.082 06:16:38 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:24:08.082 06:16:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:08.082 06:16:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:08.082 06:16:38 -- common/autotest_common.sh@10 -- # set +x 00:24:08.082 ************************************ 00:24:08.082 START TEST spdk_dd_basic_rw 00:24:08.082 ************************************ 00:24:08.082 06:16:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:24:08.082 * Looking for test storage... 
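
The liburing probe traced just above never invokes ldd: dd/common.sh runs spdk_dd with LD_TRACE_LOADED_OBJECTS=1, which makes the dynamic loader print the dependency list and exit instead of executing the binary, then scans that list for liburing. The loop, condensed:

# check_liburing condensed: under LD_TRACE_LOADED_OBJECTS=1 the loader
# prints "lib => path (addr)" lines and exits without running spdk_dd.
liburing_in_use=0
while read -r lib _ so _; do
    [[ $lib == liburing.so.* ]] && liburing_in_use=1 && break
done < <(LD_TRACE_LOADED_OBJECTS=1 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
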
00:24:08.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:24:08.082 06:16:38 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:08.082 06:16:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:08.082 06:16:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:08.082 06:16:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:08.082 06:16:38 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:08.082 06:16:38 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:08.082 06:16:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:08.082 06:16:38 -- paths/export.sh@5 -- # export PATH 00:24:08.082 06:16:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:08.082 06:16:38 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:24:08.082 06:16:38 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:24:08.082 06:16:38 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:24:08.082 06:16:38 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:24:08.082 06:16:38 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:24:08.082 06:16:38 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:24:08.082 06:16:38 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:24:08.082 06:16:38 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:08.082 06:16:38 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:08.082 06:16:38 -- 
dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:24:08.082 06:16:38 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:24:08.082 06:16:38 -- dd/common.sh@126 -- # mapfile -t id 00:24:08.082 06:16:38 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:24:08.344 06:16:38 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: 
nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 109 Data Units Written: 7 Host Read Commands: 2311 Host Write Commands: 110 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable 
Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:24:08.344 06:16:38 -- dd/common.sh@130 -- # lbaf=04 00:24:08.345 06:16:38 -- dd/common.sh@131 -- # [[ [identical spdk_nvme_identify dump repeated verbatim by the shell trace; elided] =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:24:08.345 06:16:38 -- dd/common.sh@132 -- # lbaf=4096 00:24:08.345 06:16:38 -- dd/common.sh@134 -- # echo 4096 00:24:08.345 06:16:38 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:24:08.345 06:16:38 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:24:08.345 06:16:38 -- dd/basic_rw.sh@96 -- # gen_conf 00:24:08.345 06:16:38 -- dd/basic_rw.sh@96 -- # : 00:24:08.345 06:16:38 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:24:08.345 06:16:38 -- dd/common.sh@31 -- # xtrace_disable 00:24:08.345 06:16:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:08.345 06:16:38 -- common/autotest_common.sh@10 -- # set +x 00:24:08.345 06:16:38 -- common/autotest_common.sh@10 -- # set +x 00:24:08.345 ************************************ 
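Annotation, before the dd_bs_lt_native_bs test starts below: get_native_nvme_bs derived the native block size purely from text. It captures the whole spdk_nvme_identify dump with mapfile, pulls the current LBA format index with one bash regex (giving lbaf=04), then pulls that format's data size with a second (giving 4096). Condensed to its two regex steps:

  # Condensed sketch of get_native_nvme_bs (dd/common.sh): two bash regexes
  # over the spdk_nvme_identify output shown above.
  mapfile -t id < <(build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0')
  re='Current LBA Format: *LBA Format #([0-9]+)'
  [[ ${id[*]} =~ $re ]] && lbaf=${BASH_REMATCH[1]}        # "04" here
  re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
  [[ ${id[*]} =~ $re ]] && native_bs=${BASH_REMATCH[1]}   # "4096" here
  echo "$native_bs"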
00:24:08.345 START TEST dd_bs_lt_native_bs 00:24:08.345 ************************************ 00:24:08.345 06:16:38 -- common/autotest_common.sh@1104 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:24:08.345 06:16:38 -- common/autotest_common.sh@640 -- # local es=0 00:24:08.345 06:16:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:24:08.345 06:16:38 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:08.345 06:16:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:08.345 06:16:38 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:08.345 06:16:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:08.345 06:16:38 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:08.345 06:16:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:08.345 06:16:38 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:08.345 06:16:38 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:24:08.345 06:16:38 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:24:08.345 { 00:24:08.345 "subsystems": [ 00:24:08.345 { 00:24:08.345 "subsystem": "bdev", 00:24:08.345 "config": [ 00:24:08.345 { 00:24:08.345 "params": { 00:24:08.345 "trtype": "pcie", 00:24:08.345 "traddr": "0000:00:06.0", 00:24:08.345 "name": "Nvme0" 00:24:08.345 }, 00:24:08.345 "method": "bdev_nvme_attach_controller" 00:24:08.345 }, 00:24:08.345 { 00:24:08.345 "method": "bdev_wait_for_examine" 00:24:08.345 } 00:24:08.345 ] 00:24:08.345 } 00:24:08.345 ] 00:24:08.345 } 00:24:08.345 [2024-06-11 06:16:38.898596] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:24:08.345 [2024-06-11 06:16:38.898965] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128636 ] 00:24:08.605 [2024-06-11 06:16:39.081668] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.864 [2024-06-11 06:16:39.311204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.432 [2024-06-11 06:16:39.776196] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:24:09.432 [2024-06-11 06:16:39.776536] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:10.377 [2024-06-11 06:16:40.682740] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:24:10.636 06:16:41 -- common/autotest_common.sh@643 -- # es=234 00:24:10.636 06:16:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:10.636 06:16:41 -- common/autotest_common.sh@652 -- # es=106 00:24:10.636 06:16:41 -- common/autotest_common.sh@653 -- # case "$es" in 00:24:10.636 06:16:41 -- common/autotest_common.sh@660 -- # es=1 00:24:10.636 06:16:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:10.636 00:24:10.636 real 0m2.389s 00:24:10.636 user 0m1.965s 00:24:10.636 sys 0m0.380s 00:24:10.636 06:16:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:10.636 ************************************ 00:24:10.636 END TEST dd_bs_lt_native_bs 00:24:10.636 ************************************ 00:24:10.636 06:16:41 -- common/autotest_common.sh@10 -- # set +x 00:24:10.636 06:16:41 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:24:10.636 06:16:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:10.636 06:16:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:10.636 06:16:41 -- common/autotest_common.sh@10 -- # set +x 00:24:10.636 ************************************ 00:24:10.636 START TEST dd_rw 00:24:10.636 ************************************ 00:24:10.636 06:16:41 -- common/autotest_common.sh@1104 -- # basic_rw 4096 00:24:10.636 06:16:41 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:24:10.636 06:16:41 -- dd/basic_rw.sh@12 -- # local count size 00:24:10.636 06:16:41 -- dd/basic_rw.sh@13 -- # local qds bss 00:24:10.636 06:16:41 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:24:10.636 06:16:41 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:24:10.636 06:16:41 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:24:10.636 06:16:41 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:24:10.636 06:16:41 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:24:10.636 06:16:41 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:24:10.636 06:16:41 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:24:10.636 06:16:41 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:24:10.636 06:16:41 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:24:10.636 06:16:41 -- dd/basic_rw.sh@23 -- # count=15 00:24:10.636 06:16:41 -- dd/basic_rw.sh@24 -- # count=15 00:24:10.636 06:16:41 -- dd/basic_rw.sh@25 -- # size=61440 00:24:10.636 06:16:41 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:24:10.636 06:16:41 -- dd/common.sh@98 -- # xtrace_disable 00:24:10.636 06:16:41 -- common/autotest_common.sh@10 -- # set +x 00:24:11.203 06:16:41 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
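Annotation: dd_bs_lt_native_bs passed because spdk_dd rejected the undersized block size, as its "--bs value cannot be less than input (1) neither output (4096) native block size" error shows. The NOT wrapper turns that expected failure into test success: the raw exit status 234 is folded to 106, mapped down to 1, and `(( !es == 0 ))` then succeeds. A condensed sketch of the idiom; the real helper in autotest_common.sh maps more statuses than shown:

  # Condensed sketch of NOT(): succeed only if the wrapped command fails.
  NOT() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && es=$(( es - 128 ))  # fold signal-range statuses (234 -> 106)
      # autotest_common.sh additionally maps known statuses down to 1 here
      (( es != 0 ))                          # success iff the command failed
  }
  # As used above: --bs=2048 against a 4096-byte native block must be rejected.
  # NOT build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61

With that out of the way, dd_rw starts its first pass: 15 blocks of 4096 bytes (61440 bytes) at queue depth 1, written below.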
00:24:11.203 06:16:41 -- dd/basic_rw.sh@30 -- # gen_conf 00:24:11.203 06:16:41 -- dd/common.sh@31 -- # xtrace_disable 00:24:11.203 06:16:41 -- common/autotest_common.sh@10 -- # set +x 00:24:11.462 { 00:24:11.462 "subsystems": [ 00:24:11.462 { 00:24:11.462 "subsystem": "bdev", 00:24:11.462 "config": [ 00:24:11.462 { 00:24:11.462 "params": { 00:24:11.462 "trtype": "pcie", 00:24:11.462 "traddr": "0000:00:06.0", 00:24:11.462 "name": "Nvme0" 00:24:11.462 }, 00:24:11.462 "method": "bdev_nvme_attach_controller" 00:24:11.462 }, 00:24:11.462 { 00:24:11.462 "method": "bdev_wait_for_examine" 00:24:11.462 } 00:24:11.462 ] 00:24:11.462 } 00:24:11.462 ] 00:24:11.462 } 00:24:11.462 [2024-06-11 06:16:41.865387] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:11.462 [2024-06-11 06:16:41.865529] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128697 ] 00:24:11.462 [2024-06-11 06:16:42.029244] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.720 [2024-06-11 06:16:42.263731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.664  Copying: 60/60 [kB] (average 19 MBps) 00:24:13.664 00:24:13.664 06:16:44 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:24:13.664 06:16:44 -- dd/basic_rw.sh@37 -- # gen_conf 00:24:13.664 06:16:44 -- dd/common.sh@31 -- # xtrace_disable 00:24:13.664 06:16:44 -- common/autotest_common.sh@10 -- # set +x 00:24:13.664 { 00:24:13.664 "subsystems": [ 00:24:13.664 { 00:24:13.664 "subsystem": "bdev", 00:24:13.664 "config": [ 00:24:13.664 { 00:24:13.664 "params": { 00:24:13.664 "trtype": "pcie", 00:24:13.664 "traddr": "0000:00:06.0", 00:24:13.664 "name": "Nvme0" 00:24:13.664 }, 00:24:13.664 "method": "bdev_nvme_attach_controller" 00:24:13.664 }, 00:24:13.664 { 00:24:13.664 "method": "bdev_wait_for_examine" 00:24:13.664 } 00:24:13.664 ] 00:24:13.664 } 00:24:13.664 ] 00:24:13.664 } 00:24:13.664 [2024-06-11 06:16:44.131881] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:24:13.664 [2024-06-11 06:16:44.132301] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128729 ] 00:24:13.923 [2024-06-11 06:16:44.313968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.924 [2024-06-11 06:16:44.543607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.870  Copying: 60/60 [kB] (average 19 MBps) 00:24:15.870 00:24:15.870 06:16:46 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:15.870 06:16:46 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:24:15.870 06:16:46 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:24:15.870 06:16:46 -- dd/common.sh@11 -- # local nvme_ref= 00:24:15.870 06:16:46 -- dd/common.sh@12 -- # local size=61440 00:24:15.870 06:16:46 -- dd/common.sh@14 -- # local bs=1048576 00:24:15.870 06:16:46 -- dd/common.sh@15 -- # local count=1 00:24:15.871 06:16:46 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:24:15.871 06:16:46 -- dd/common.sh@18 -- # gen_conf 00:24:15.871 06:16:46 -- dd/common.sh@31 -- # xtrace_disable 00:24:15.871 06:16:46 -- common/autotest_common.sh@10 -- # set +x 00:24:15.871 { 00:24:15.871 "subsystems": [ 00:24:15.871 { 00:24:15.871 "subsystem": "bdev", 00:24:15.871 "config": [ 00:24:15.871 { 00:24:15.871 "params": { 00:24:15.871 "trtype": "pcie", 00:24:15.871 "traddr": "0000:00:06.0", 00:24:15.871 "name": "Nvme0" 00:24:15.871 }, 00:24:15.871 "method": "bdev_nvme_attach_controller" 00:24:15.871 }, 00:24:15.871 { 00:24:15.871 "method": "bdev_wait_for_examine" 00:24:15.871 } 00:24:15.871 ] 00:24:15.871 } 00:24:15.871 ] 00:24:15.871 } 00:24:16.129 [2024-06-11 06:16:46.537808] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
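Annotation: that completes one full dd_rw pass at bs=4096, qd=1: write dd.dump0 to the bdev, read it back into dd.dump1, diff the two, then clear_nvme wipes the region, rounding the 61440 bytes up to a single 1 MiB zero-fill block (bs=1048576, count=1). The pass in outline, with paths shortened and gen_conf standing for the bdev JSON shown in the log:

  # One dd_rw pass at bs=4096, qd=1:
  spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json <(gen_conf)             # write 15 blocks
  spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs=4096 --qd=1 --count=15 --json <(gen_conf)  # read them back
  diff -q dd.dump0 dd.dump1                                                          # round trip must be byte-identical
  spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json <(gen_conf)      # clear_nvme: one 1 MiB zero block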
00:24:16.129 [2024-06-11 06:16:46.538474] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128762 ] 00:24:16.129 [2024-06-11 06:16:46.717803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.387 [2024-06-11 06:16:46.948215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.332  Copying: 1024/1024 [kB] (average 500 MBps) 00:24:18.332 00:24:18.332 06:16:48 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:24:18.332 06:16:48 -- dd/basic_rw.sh@23 -- # count=15 00:24:18.332 06:16:48 -- dd/basic_rw.sh@24 -- # count=15 00:24:18.332 06:16:48 -- dd/basic_rw.sh@25 -- # size=61440 00:24:18.332 06:16:48 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:24:18.332 06:16:48 -- dd/common.sh@98 -- # xtrace_disable 00:24:18.332 06:16:48 -- common/autotest_common.sh@10 -- # set +x 00:24:18.591 06:16:49 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:24:18.591 06:16:49 -- dd/basic_rw.sh@30 -- # gen_conf 00:24:18.591 06:16:49 -- dd/common.sh@31 -- # xtrace_disable 00:24:18.591 06:16:49 -- common/autotest_common.sh@10 -- # set +x 00:24:18.849 { 00:24:18.849 "subsystems": [ 00:24:18.849 { 00:24:18.849 "subsystem": "bdev", 00:24:18.849 "config": [ 00:24:18.849 { 00:24:18.849 "params": { 00:24:18.849 "trtype": "pcie", 00:24:18.849 "traddr": "0000:00:06.0", 00:24:18.849 "name": "Nvme0" 00:24:18.849 }, 00:24:18.849 "method": "bdev_nvme_attach_controller" 00:24:18.849 }, 00:24:18.849 { 00:24:18.849 "method": "bdev_wait_for_examine" 00:24:18.849 } 00:24:18.849 ] 00:24:18.849 } 00:24:18.849 ] 00:24:18.849 } 00:24:18.849 [2024-06-11 06:16:49.279940] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:18.849 [2024-06-11 06:16:49.280335] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128804 ] 00:24:18.849 [2024-06-11 06:16:49.461294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.108 [2024-06-11 06:16:49.691358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.054  Copying: 60/60 [kB] (average 58 MBps) 00:24:21.054 00:24:21.054 06:16:51 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:24:21.054 06:16:51 -- dd/basic_rw.sh@37 -- # gen_conf 00:24:21.054 06:16:51 -- dd/common.sh@31 -- # xtrace_disable 00:24:21.054 06:16:51 -- common/autotest_common.sh@10 -- # set +x 00:24:21.054 { 00:24:21.054 "subsystems": [ 00:24:21.054 { 00:24:21.054 "subsystem": "bdev", 00:24:21.054 "config": [ 00:24:21.054 { 00:24:21.054 "params": { 00:24:21.054 "trtype": "pcie", 00:24:21.054 "traddr": "0000:00:06.0", 00:24:21.054 "name": "Nvme0" 00:24:21.054 }, 00:24:21.054 "method": "bdev_nvme_attach_controller" 00:24:21.054 }, 00:24:21.054 { 00:24:21.054 "method": "bdev_wait_for_examine" 00:24:21.054 } 00:24:21.054 ] 00:24:21.054 } 00:24:21.054 ] 00:24:21.054 } 00:24:21.054 [2024-06-11 06:16:51.687024] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:24:21.054 [2024-06-11 06:16:51.687441] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128836 ] 00:24:21.313 [2024-06-11 06:16:51.869382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.573 [2024-06-11 06:16:52.097345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.520  Copying: 60/60 [kB] (average 58 MBps) 00:24:23.520 00:24:23.520 06:16:53 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:23.520 06:16:53 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:24:23.520 06:16:53 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:24:23.520 06:16:53 -- dd/common.sh@11 -- # local nvme_ref= 00:24:23.520 06:16:53 -- dd/common.sh@12 -- # local size=61440 00:24:23.520 06:16:53 -- dd/common.sh@14 -- # local bs=1048576 00:24:23.520 06:16:53 -- dd/common.sh@15 -- # local count=1 00:24:23.520 06:16:53 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:24:23.520 06:16:53 -- dd/common.sh@18 -- # gen_conf 00:24:23.520 06:16:53 -- dd/common.sh@31 -- # xtrace_disable 00:24:23.520 06:16:53 -- common/autotest_common.sh@10 -- # set +x 00:24:23.520 { 00:24:23.520 "subsystems": [ 00:24:23.520 { 00:24:23.520 "subsystem": "bdev", 00:24:23.520 "config": [ 00:24:23.520 { 00:24:23.520 "params": { 00:24:23.520 "trtype": "pcie", 00:24:23.520 "traddr": "0000:00:06.0", 00:24:23.520 "name": "Nvme0" 00:24:23.520 }, 00:24:23.520 "method": "bdev_nvme_attach_controller" 00:24:23.520 }, 00:24:23.520 { 00:24:23.520 "method": "bdev_wait_for_examine" 00:24:23.520 } 00:24:23.520 ] 00:24:23.520 } 00:24:23.520 ] 00:24:23.520 } 00:24:23.520 [2024-06-11 06:16:53.937134] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
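Annotation: worth noting in the throughput figures so far: at bs=4096 the 60 kB transfers average 19 MBps at qd=1 but 58 MBps at qd=64, since the deeper queue keeps more I/O in flight on the emulated controller. The two invocations differ only in --qd:

  # Same 60 kB transfer, only --qd differs; averages as reported above:
  spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1  --json <(gen_conf)   # ~19 MBps
  spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json <(gen_conf)   # ~58 MBps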
00:24:23.520 [2024-06-11 06:16:53.937398] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128876 ] 00:24:23.520 [2024-06-11 06:16:54.100786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.779 [2024-06-11 06:16:54.330457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.773  Copying: 1024/1024 [kB] (average 1000 MBps) 00:24:25.773 00:24:25.773 06:16:56 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:24:25.773 06:16:56 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:24:25.773 06:16:56 -- dd/basic_rw.sh@23 -- # count=7 00:24:25.773 06:16:56 -- dd/basic_rw.sh@24 -- # count=7 00:24:25.773 06:16:56 -- dd/basic_rw.sh@25 -- # size=57344 00:24:25.773 06:16:56 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:24:25.773 06:16:56 -- dd/common.sh@98 -- # xtrace_disable 00:24:25.773 06:16:56 -- common/autotest_common.sh@10 -- # set +x 00:24:26.342 06:16:56 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:24:26.342 06:16:56 -- dd/basic_rw.sh@30 -- # gen_conf 00:24:26.342 06:16:56 -- dd/common.sh@31 -- # xtrace_disable 00:24:26.342 06:16:56 -- common/autotest_common.sh@10 -- # set +x 00:24:26.342 { 00:24:26.342 "subsystems": [ 00:24:26.342 { 00:24:26.342 "subsystem": "bdev", 00:24:26.342 "config": [ 00:24:26.342 { 00:24:26.342 "params": { 00:24:26.342 "trtype": "pcie", 00:24:26.342 "traddr": "0000:00:06.0", 00:24:26.342 "name": "Nvme0" 00:24:26.342 }, 00:24:26.342 "method": "bdev_nvme_attach_controller" 00:24:26.342 }, 00:24:26.342 { 00:24:26.342 "method": "bdev_wait_for_examine" 00:24:26.342 } 00:24:26.342 ] 00:24:26.342 } 00:24:26.342 ] 00:24:26.342 } 00:24:26.342 [2024-06-11 06:16:56.805220] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
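Annotation: the harness has now moved to the next block size. Across the whole dd_rw section the geometry is: the three block sizes are the native block size shifted left 0..2, and count shrinks so each pass fits within the 61440-byte dump files, hence count=15/size=61440, count=7/size=57344, count=3/size=49152. One way to reproduce those numbers (the script's own derivation in basic_rw.sh may differ):

  # Reproducing the dd_rw geometry seen in this log:
  native_bs=4096
  qds=(1 64)
  for s in 0 1 2; do
      bs=$(( native_bs << s ))   # 4096, 8192, 16384
      count=$(( 61440 / bs ))    # 15, 7, 3 (truncating division)
      size=$(( count * bs ))     # 61440, 57344, 49152 bytes per pass
  done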
00:24:26.342 [2024-06-11 06:16:56.805631] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128908 ] 00:24:26.342 [2024-06-11 06:16:56.979353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.601 [2024-06-11 06:16:57.228656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.550  Copying: 56/56 [kB] (average 27 MBps) 00:24:28.550 00:24:28.550 06:16:59 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:24:28.550 06:16:59 -- dd/basic_rw.sh@37 -- # gen_conf 00:24:28.550 06:16:59 -- dd/common.sh@31 -- # xtrace_disable 00:24:28.550 06:16:59 -- common/autotest_common.sh@10 -- # set +x 00:24:28.550 { 00:24:28.550 "subsystems": [ 00:24:28.550 { 00:24:28.550 "subsystem": "bdev", 00:24:28.550 "config": [ 00:24:28.550 { 00:24:28.550 "params": { 00:24:28.550 "trtype": "pcie", 00:24:28.550 "traddr": "0000:00:06.0", 00:24:28.550 "name": "Nvme0" 00:24:28.550 }, 00:24:28.550 "method": "bdev_nvme_attach_controller" 00:24:28.550 }, 00:24:28.550 { 00:24:28.550 "method": "bdev_wait_for_examine" 00:24:28.550 } 00:24:28.550 ] 00:24:28.550 } 00:24:28.550 ] 00:24:28.550 } 00:24:28.550 [2024-06-11 06:16:59.089234] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:28.550 [2024-06-11 06:16:59.089627] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128947 ] 00:24:28.809 [2024-06-11 06:16:59.273286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.068 [2024-06-11 06:16:59.509282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.014  Copying: 56/56 [kB] (average 54 MBps) 00:24:31.014 00:24:31.014 06:17:01 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:31.014 06:17:01 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:24:31.014 06:17:01 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:24:31.014 06:17:01 -- dd/common.sh@11 -- # local nvme_ref= 00:24:31.014 06:17:01 -- dd/common.sh@12 -- # local size=57344 00:24:31.014 06:17:01 -- dd/common.sh@14 -- # local bs=1048576 00:24:31.014 06:17:01 -- dd/common.sh@15 -- # local count=1 00:24:31.014 06:17:01 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:24:31.014 06:17:01 -- dd/common.sh@18 -- # gen_conf 00:24:31.014 06:17:01 -- dd/common.sh@31 -- # xtrace_disable 00:24:31.014 06:17:01 -- common/autotest_common.sh@10 -- # set +x 00:24:31.014 { 00:24:31.014 "subsystems": [ 00:24:31.014 { 00:24:31.014 "subsystem": "bdev", 00:24:31.014 "config": [ 00:24:31.014 { 00:24:31.014 "params": { 00:24:31.014 "trtype": "pcie", 00:24:31.014 "traddr": "0000:00:06.0", 00:24:31.014 "name": "Nvme0" 00:24:31.014 }, 00:24:31.014 "method": "bdev_nvme_attach_controller" 00:24:31.014 }, 00:24:31.014 { 00:24:31.014 "method": "bdev_wait_for_examine" 00:24:31.014 } 00:24:31.014 ] 00:24:31.014 } 00:24:31.014 ] 00:24:31.014 } 00:24:31.014 [2024-06-11 06:17:01.496706] Starting SPDK v24.01.1-pre git sha1 
130b9406a / DPDK 23.11.0 initialization... 00:24:31.014 [2024-06-11 06:17:01.496994] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128984 ] 00:24:31.273 [2024-06-11 06:17:01.661000] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.273 [2024-06-11 06:17:01.887471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.217  Copying: 1024/1024 [kB] (average 1000 MBps) 00:24:33.217 00:24:33.217 06:17:03 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:24:33.217 06:17:03 -- dd/basic_rw.sh@23 -- # count=7 00:24:33.217 06:17:03 -- dd/basic_rw.sh@24 -- # count=7 00:24:33.217 06:17:03 -- dd/basic_rw.sh@25 -- # size=57344 00:24:33.217 06:17:03 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:24:33.217 06:17:03 -- dd/common.sh@98 -- # xtrace_disable 00:24:33.217 06:17:03 -- common/autotest_common.sh@10 -- # set +x 00:24:33.785 06:17:04 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:24:33.785 06:17:04 -- dd/basic_rw.sh@30 -- # gen_conf 00:24:33.785 06:17:04 -- dd/common.sh@31 -- # xtrace_disable 00:24:33.785 06:17:04 -- common/autotest_common.sh@10 -- # set +x 00:24:33.785 { 00:24:33.785 "subsystems": [ 00:24:33.785 { 00:24:33.785 "subsystem": "bdev", 00:24:33.785 "config": [ 00:24:33.785 { 00:24:33.785 "params": { 00:24:33.785 "trtype": "pcie", 00:24:33.785 "traddr": "0000:00:06.0", 00:24:33.786 "name": "Nvme0" 00:24:33.786 }, 00:24:33.786 "method": "bdev_nvme_attach_controller" 00:24:33.786 }, 00:24:33.786 { 00:24:33.786 "method": "bdev_wait_for_examine" 00:24:33.786 } 00:24:33.786 ] 00:24:33.786 } 00:24:33.786 ] 00:24:33.786 } 00:24:33.786 [2024-06-11 06:17:04.236835] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:24:33.786 [2024-06-11 06:17:04.237116] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129023 ] 00:24:33.786 [2024-06-11 06:17:04.402007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.045 [2024-06-11 06:17:04.649533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.991  Copying: 56/56 [kB] (average 54 MBps) 00:24:35.991 00:24:35.991 06:17:06 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:24:35.991 06:17:06 -- dd/basic_rw.sh@37 -- # gen_conf 00:24:35.991 06:17:06 -- dd/common.sh@31 -- # xtrace_disable 00:24:35.991 06:17:06 -- common/autotest_common.sh@10 -- # set +x 00:24:35.991 { 00:24:35.991 "subsystems": [ 00:24:35.991 { 00:24:35.991 "subsystem": "bdev", 00:24:35.991 "config": [ 00:24:35.991 { 00:24:35.991 "params": { 00:24:35.991 "trtype": "pcie", 00:24:35.991 "traddr": "0000:00:06.0", 00:24:35.991 "name": "Nvme0" 00:24:35.991 }, 00:24:35.991 "method": "bdev_nvme_attach_controller" 00:24:35.991 }, 00:24:35.991 { 00:24:35.991 "method": "bdev_wait_for_examine" 00:24:35.991 } 00:24:35.991 ] 00:24:35.991 } 00:24:35.991 ] 00:24:35.991 } 00:24:36.250 [2024-06-11 06:17:06.641892] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:36.250 [2024-06-11 06:17:06.642298] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129055 ] 00:24:36.250 [2024-06-11 06:17:06.818449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.509 [2024-06-11 06:17:07.070539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.456  Copying: 56/56 [kB] (average 54 MBps) 00:24:38.456 00:24:38.456 06:17:08 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:38.456 06:17:08 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:24:38.456 06:17:08 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:24:38.456 06:17:08 -- dd/common.sh@11 -- # local nvme_ref= 00:24:38.456 06:17:08 -- dd/common.sh@12 -- # local size=57344 00:24:38.456 06:17:08 -- dd/common.sh@14 -- # local bs=1048576 00:24:38.456 06:17:08 -- dd/common.sh@15 -- # local count=1 00:24:38.456 06:17:08 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:24:38.456 06:17:08 -- dd/common.sh@18 -- # gen_conf 00:24:38.456 06:17:08 -- dd/common.sh@31 -- # xtrace_disable 00:24:38.456 06:17:08 -- common/autotest_common.sh@10 -- # set +x 00:24:38.456 { 00:24:38.456 "subsystems": [ 00:24:38.456 { 00:24:38.456 "subsystem": "bdev", 00:24:38.456 "config": [ 00:24:38.456 { 00:24:38.456 "params": { 00:24:38.456 "trtype": "pcie", 00:24:38.456 "traddr": "0000:00:06.0", 00:24:38.456 "name": "Nvme0" 00:24:38.456 }, 00:24:38.456 "method": "bdev_nvme_attach_controller" 00:24:38.456 }, 00:24:38.456 { 00:24:38.456 "method": "bdev_wait_for_examine" 00:24:38.456 } 00:24:38.456 ] 00:24:38.456 } 00:24:38.456 ] 00:24:38.456 } 00:24:38.456 [2024-06-11 06:17:08.930094] Starting SPDK v24.01.1-pre git sha1 
130b9406a / DPDK 23.11.0 initialization... 00:24:38.456 [2024-06-11 06:17:08.930474] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129088 ] 00:24:38.717 [2024-06-11 06:17:09.113934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.979 [2024-06-11 06:17:09.377047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.698  Copying: 1024/1024 [kB] (average 1000 MBps) 00:24:40.698 00:24:40.698 06:17:11 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:24:40.698 06:17:11 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:24:40.698 06:17:11 -- dd/basic_rw.sh@23 -- # count=3 00:24:40.698 06:17:11 -- dd/basic_rw.sh@24 -- # count=3 00:24:40.698 06:17:11 -- dd/basic_rw.sh@25 -- # size=49152 00:24:40.698 06:17:11 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:24:40.698 06:17:11 -- dd/common.sh@98 -- # xtrace_disable 00:24:40.698 06:17:11 -- common/autotest_common.sh@10 -- # set +x 00:24:41.266 06:17:11 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:24:41.266 06:17:11 -- dd/basic_rw.sh@30 -- # gen_conf 00:24:41.266 06:17:11 -- dd/common.sh@31 -- # xtrace_disable 00:24:41.266 06:17:11 -- common/autotest_common.sh@10 -- # set +x 00:24:41.266 { 00:24:41.266 "subsystems": [ 00:24:41.266 { 00:24:41.266 "subsystem": "bdev", 00:24:41.266 "config": [ 00:24:41.266 { 00:24:41.266 "params": { 00:24:41.266 "trtype": "pcie", 00:24:41.266 "traddr": "0000:00:06.0", 00:24:41.266 "name": "Nvme0" 00:24:41.266 }, 00:24:41.266 "method": "bdev_nvme_attach_controller" 00:24:41.266 }, 00:24:41.266 { 00:24:41.266 "method": "bdev_wait_for_examine" 00:24:41.266 } 00:24:41.266 ] 00:24:41.266 } 00:24:41.266 ] 00:24:41.266 } 00:24:41.266 [2024-06-11 06:17:11.763392] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
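Annotation: every spdk_dd invocation in this section receives its bdev configuration through an anonymous descriptor rather than a file on disk: gen_conf prints the JSON dumped repeatedly above, and --json /dev/fd/62 reads it back. The equivalent with process substitution, with the config inlined verbatim from the log (the harness's gen_conf builds it from the test parameters instead):

  # Passing the bdev config through an anonymous fd, as --json /dev/fd/62 does:
  gen_conf() {
      printf '%s' '{"subsystems": [{"subsystem": "bdev", "config": [
        {"params": {"trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0"},
         "method": "bdev_nvme_attach_controller"},
        {"method": "bdev_wait_for_examine"}]}]}'
  }
  spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json <(gen_conf)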
00:24:41.266 [2024-06-11 06:17:11.763773] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129134 ] 00:24:41.525 [2024-06-11 06:17:11.945878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.785 [2024-06-11 06:17:12.180243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:43.423  Copying: 48/48 [kB] (average 46 MBps) 00:24:43.423 00:24:43.423 06:17:13 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:24:43.423 06:17:13 -- dd/basic_rw.sh@37 -- # gen_conf 00:24:43.423 06:17:13 -- dd/common.sh@31 -- # xtrace_disable 00:24:43.423 06:17:13 -- common/autotest_common.sh@10 -- # set +x 00:24:43.423 { 00:24:43.423 "subsystems": [ 00:24:43.423 { 00:24:43.423 "subsystem": "bdev", 00:24:43.423 "config": [ 00:24:43.423 { 00:24:43.423 "params": { 00:24:43.423 "trtype": "pcie", 00:24:43.423 "traddr": "0000:00:06.0", 00:24:43.423 "name": "Nvme0" 00:24:43.423 }, 00:24:43.423 "method": "bdev_nvme_attach_controller" 00:24:43.423 }, 00:24:43.423 { 00:24:43.423 "method": "bdev_wait_for_examine" 00:24:43.423 } 00:24:43.423 ] 00:24:43.423 } 00:24:43.423 ] 00:24:43.423 } 00:24:43.423 [2024-06-11 06:17:14.062086] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:43.423 [2024-06-11 06:17:14.062479] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129162 ] 00:24:43.683 [2024-06-11 06:17:14.245885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.942 [2024-06-11 06:17:14.494028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.888  Copying: 48/48 [kB] (average 46 MBps) 00:24:45.888 00:24:45.888 06:17:16 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:45.888 06:17:16 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:24:45.888 06:17:16 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:24:45.888 06:17:16 -- dd/common.sh@11 -- # local nvme_ref= 00:24:45.888 06:17:16 -- dd/common.sh@12 -- # local size=49152 00:24:45.888 06:17:16 -- dd/common.sh@14 -- # local bs=1048576 00:24:45.888 06:17:16 -- dd/common.sh@15 -- # local count=1 00:24:45.888 06:17:16 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:24:45.888 06:17:16 -- dd/common.sh@18 -- # gen_conf 00:24:45.888 06:17:16 -- dd/common.sh@31 -- # xtrace_disable 00:24:45.888 06:17:16 -- common/autotest_common.sh@10 -- # set +x 00:24:45.888 { 00:24:45.888 "subsystems": [ 00:24:45.888 { 00:24:45.888 "subsystem": "bdev", 00:24:45.888 "config": [ 00:24:45.888 { 00:24:45.888 "params": { 00:24:45.888 "trtype": "pcie", 00:24:45.888 "traddr": "0000:00:06.0", 00:24:45.888 "name": "Nvme0" 00:24:45.888 }, 00:24:45.888 "method": "bdev_nvme_attach_controller" 00:24:45.888 }, 00:24:45.888 { 00:24:45.888 "method": "bdev_wait_for_examine" 00:24:45.888 } 00:24:45.888 ] 00:24:45.888 } 00:24:45.888 ] 00:24:45.888 } 00:24:45.888 [2024-06-11 06:17:16.483880] Starting SPDK v24.01.1-pre git sha1 
130b9406a / DPDK 23.11.0 initialization... 00:24:45.888 [2024-06-11 06:17:16.484281] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129197 ] 00:24:46.152 [2024-06-11 06:17:16.665990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.414 [2024-06-11 06:17:16.909720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.357  Copying: 1024/1024 [kB] (average 1000 MBps) 00:24:48.357 00:24:48.358 06:17:18 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:24:48.358 06:17:18 -- dd/basic_rw.sh@23 -- # count=3 00:24:48.358 06:17:18 -- dd/basic_rw.sh@24 -- # count=3 00:24:48.358 06:17:18 -- dd/basic_rw.sh@25 -- # size=49152 00:24:48.358 06:17:18 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:24:48.358 06:17:18 -- dd/common.sh@98 -- # xtrace_disable 00:24:48.358 06:17:18 -- common/autotest_common.sh@10 -- # set +x 00:24:48.617 06:17:19 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:24:48.617 06:17:19 -- dd/basic_rw.sh@30 -- # gen_conf 00:24:48.617 06:17:19 -- dd/common.sh@31 -- # xtrace_disable 00:24:48.617 06:17:19 -- common/autotest_common.sh@10 -- # set +x 00:24:48.617 { 00:24:48.617 "subsystems": [ 00:24:48.617 { 00:24:48.617 "subsystem": "bdev", 00:24:48.617 "config": [ 00:24:48.617 { 00:24:48.617 "params": { 00:24:48.617 "trtype": "pcie", 00:24:48.617 "traddr": "0000:00:06.0", 00:24:48.617 "name": "Nvme0" 00:24:48.617 }, 00:24:48.617 "method": "bdev_nvme_attach_controller" 00:24:48.617 }, 00:24:48.617 { 00:24:48.617 "method": "bdev_wait_for_examine" 00:24:48.617 } 00:24:48.617 ] 00:24:48.617 } 00:24:48.617 ] 00:24:48.617 } 00:24:48.617 [2024-06-11 06:17:19.223879] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:24:48.617 [2024-06-11 06:17:19.224269] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129238 ] 00:24:48.875 [2024-06-11 06:17:19.406507] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.134 [2024-06-11 06:17:19.653949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.077  Copying: 48/48 [kB] (average 46 MBps) 00:24:51.077 00:24:51.077 06:17:21 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:24:51.077 06:17:21 -- dd/basic_rw.sh@37 -- # gen_conf 00:24:51.077 06:17:21 -- dd/common.sh@31 -- # xtrace_disable 00:24:51.077 06:17:21 -- common/autotest_common.sh@10 -- # set +x 00:24:51.077 { 00:24:51.077 "subsystems": [ 00:24:51.077 { 00:24:51.077 "subsystem": "bdev", 00:24:51.077 "config": [ 00:24:51.077 { 00:24:51.077 "params": { 00:24:51.077 "trtype": "pcie", 00:24:51.077 "traddr": "0000:00:06.0", 00:24:51.077 "name": "Nvme0" 00:24:51.077 }, 00:24:51.077 "method": "bdev_nvme_attach_controller" 00:24:51.077 }, 00:24:51.077 { 00:24:51.077 "method": "bdev_wait_for_examine" 00:24:51.077 } 00:24:51.077 ] 00:24:51.077 } 00:24:51.077 ] 00:24:51.077 } 00:24:51.077 [2024-06-11 06:17:21.636940] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:51.077 [2024-06-11 06:17:21.637317] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129279 ] 00:24:51.337 [2024-06-11 06:17:21.819495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.596 [2024-06-11 06:17:22.052881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.543  Copying: 48/48 [kB] (average 46 MBps) 00:24:53.543 00:24:53.543 06:17:23 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:53.543 06:17:23 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:24:53.543 06:17:23 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:24:53.543 06:17:23 -- dd/common.sh@11 -- # local nvme_ref= 00:24:53.543 06:17:23 -- dd/common.sh@12 -- # local size=49152 00:24:53.544 06:17:23 -- dd/common.sh@14 -- # local bs=1048576 00:24:53.544 06:17:23 -- dd/common.sh@15 -- # local count=1 00:24:53.544 06:17:23 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:24:53.544 06:17:23 -- dd/common.sh@18 -- # gen_conf 00:24:53.544 06:17:23 -- dd/common.sh@31 -- # xtrace_disable 00:24:53.544 06:17:23 -- common/autotest_common.sh@10 -- # set +x 00:24:53.544 { 00:24:53.544 "subsystems": [ 00:24:53.544 { 00:24:53.544 "subsystem": "bdev", 00:24:53.544 "config": [ 00:24:53.544 { 00:24:53.544 "params": { 00:24:53.544 "trtype": "pcie", 00:24:53.544 "traddr": "0000:00:06.0", 00:24:53.544 "name": "Nvme0" 00:24:53.544 }, 00:24:53.544 "method": "bdev_nvme_attach_controller" 00:24:53.544 }, 00:24:53.544 { 00:24:53.544 "method": "bdev_wait_for_examine" 00:24:53.544 } 00:24:53.544 ] 00:24:53.544 } 00:24:53.544 ] 00:24:53.544 } 00:24:53.544 [2024-06-11 06:17:23.938404] Starting SPDK v24.01.1-pre git sha1 
130b9406a / DPDK 23.11.0 initialization... 00:24:53.544 [2024-06-11 06:17:23.938816] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129308 ] 00:24:53.544 [2024-06-11 06:17:24.122039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.803 [2024-06-11 06:17:24.368640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.751  Copying: 1024/1024 [kB] (average 500 MBps) 00:24:55.751 00:24:55.751 ************************************ 00:24:55.751 END TEST dd_rw 00:24:55.751 ************************************ 00:24:55.751 00:24:55.751 real 0m45.084s 00:24:55.751 user 0m37.201s 00:24:55.751 sys 0m6.629s 00:24:55.751 06:17:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:55.751 06:17:26 -- common/autotest_common.sh@10 -- # set +x 00:24:55.751 06:17:26 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:24:55.751 06:17:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:55.751 06:17:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:55.751 06:17:26 -- common/autotest_common.sh@10 -- # set +x 00:24:56.010 ************************************ 00:24:56.010 START TEST dd_rw_offset 00:24:56.010 ************************************ 00:24:56.010 06:17:26 -- common/autotest_common.sh@1104 -- # basic_offset 00:24:56.010 06:17:26 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:24:56.010 06:17:26 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:24:56.010 06:17:26 -- dd/common.sh@98 -- # xtrace_disable 00:24:56.010 06:17:26 -- common/autotest_common.sh@10 -- # set +x 00:24:56.010 06:17:26 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:24:56.011 06:17:26 -- dd/basic_rw.sh@56 -- # 
data=bsmf7v12ykvmxfz5oll59wpplirkhvzdkjupj3lfrd46c0r6gqi5rwp7l4h63yg55i1lsxm1yujg0n3g29g4ck1cdxfdpt2w8k6mjjyf6x6j7e0x59k4b975g4kjndb83jxz7kev2bubfqa7iky70m0uy0u7i0ou7n0g9x76ne46f6mfgprhxlfw0o9ruvr5jm73c36seaixvqnftj064mma6spg5ji8wsah9ojyv30uao4fcbudbjo9bf8dorhgh7qu73fkry8offi6ydmgfck83snom5fxd9jr7f54d6y7pcnorhx7tqctjm9qe8h6z2flkr3jao4cpe54iaa6lkl2obcanas17dnusiggmwzjr7lsdxjjyhqy5kujm61t225ng59d0q1j42djsllfkwqj4xo4ndoc0nla0i2h0u6efq4ddwaz0ia22qoludw0mtp5cq7veyybe27uatkgzc6wl4oo7ix4s9xgesdhc5mwpkwa19s7i6efrx29cacvei822xlhpscamh89zlt6rx6y05el4cas4kswjnfj0i5qorq1eftoay2zmgwjizbed6enzgl9rj1n0hoovygynxoc61vx9c5gr1wkqjccuhnheewu8qoy0wdzzexhpjjofz6sunenrmh7nu092m2frl4gyo83rkca8t21b1v86a1fabj1xq5efvaqeabbirtb2qr5qsxspky83b5qm6pn0blbf9hdfpj16irmn9lujab9soh6lxcby5v1sjsl37ml0d124yk9t2j5dnrtv96j577jcm78msy0p864v6himmhcne49euw27m8kwhw4xd1r1oy6w20ybdc96enxc4neg5yb0sc1mza2punsgs1mu1ss31m1kf81zdwspiaooan0tsetpcqm8kl9vzen7ti1p26abww4d5rwfd148rcxy8g07vr1lv03hn5jk9rwowhgzmab6b7xozm9bljgl6rv3qprtqsvyfv3bo74ryu4izio26qzqkuednctp3y1heakfit082h09vhhbe48u2dzy5qneand91rrva7qsnc3hhpxhqw4h9wgdwzagujah3cjx6fghypunpyy130ysixjajakpb2zle8jqhcd8h0gy1hunp5dc8aiin1kv1m53421d05nv2um4i9o9ucqyh2msaxuoeevoe2m1cvpx21m6gzmoofxdge9iit9cb3vahv65p13mcgmisyaci7yd3iuca252yweornnr94g1xmqyrhclua11mtvffda9a2gocjj1o7eteca7wofiwgl518myz44ne7h8s6sqii27x7njfb1iuytyetsrro3x4o6hwnm3xohoav7ix7ydffcugn7m8uhs0005wqq409y2lm3gsy00xv7ihz81w59lwjd2dkxsvpz2lejhoxfr0825h9y895rx7ntoqktpu24ckd2s8uikaepkrdwi7adm5765p8by4x10mfvz8unw8iagzen6hep890iwi590peri8auuaf87lwa3vxrsr4pwdjbgjuu8o9oo1qibt7gedvd20ee0sr785spt65gc1nrm1z0re1kyavp344mr6fy6fcf17o4pmsxnjjza4op6ay89lng7vdi5gwgtfxh1kc9p4yqvxft15rbj8a27rivhj3nng8huxnaqirrm0ooaryjqe31stdk3nimkqps9zvunfctfdj3zvgvauw29enf6itsh6bvftanjdj06fct4sxs739t4vr5e4gk8e1re1f09oj53a9njb37541drppksq072vyyfpeuuws0u3hq8dukian8d39voejawf0xt3eu8a7trw2qis8tig81156pqw7uom68v844gzpfsqu2ailnseintb5au9zsqsdxvbd6by3ob8pjsltdhmsmqe4keqhblfrjulqeyo18joe9fnpr0yfxih4v6ahx9af3oj5idp5tg4wj8quhlggkgstszduq5sead46kr4wf7r38hcpnk7d71gnjnbu3rsxrvwbqgkn5rl72ifohxhqodcmsad3k0fcmd73bpfaqybnwqql5klnirzb97gh4ok7b5p1j8dio71knu287coic117rizmogdj8nzz2sdpuegn5vo5e63jdgwme9p1wlu2wur4k2805ghjc3boguyzf72o12767wcmoaklqg0zjwtypr00wvao8b2z9gcc1bmrfjdrv3u5cicskb35tltnil222ipb2mqbeeme4y9df80jblkz149pgquywiwl87bk77rtyt5v3pgtkl7xfgz6tunof8lm0iezwbwc3ydapk9318hxngps9brtb4191dfdrn5vn2y7uyloyoyxrj2yyjp3bz8e6ucd70x3t3b3nh0uvio2ob4hstdmw317dxp0gmfvznwo1l0ranlt9cdz9md2fvn8pp05y3de680s0zhjkcwh4fl51bqohf5jz7ezw7zq7u5lm3ctqwo6nhlru74dptu7o0colvbeyy8ywow1ga1btc5ovq0vygip24r5ve2b27c2i2ehx22ce6kkrsum47kfjd8dqzrky9aq6x1me618qqji4gvkloxesmbkyn9o10xuzs1jii0cbkayso9nm86vsa21und7pu9oprp0bp7zncrsqw8o98t3moz592e5mt9o0abnpkq7c0lgprvnsgt8p0f98qt6hagjd8zblycop5kvxoqd1bqt9hj0p667uw6hxqs0irvbaosllpfqhms7d6vcikdbu8tgceb6vw89i5bgri8331a1n465zd1t2skn3jljj3w33b6dih4pfp5uqqt8ieeo4ztzuaawn8yeq1w90xejlb4te4vdnv15mgf8zd03l90293c85j2524lkcgcj8nd54701w65riiyexnyyrbymkfp52kg03po1fcvzbm11ehs72vqut1ptp6phvpa82wcl9hzxox2pgzz7mfq5ax99twltgbx909m3j3uutdkmd619vt89zxv2vnr353y4j04yjdtnz2bss11fddbj8msn3ix22739wkyvc922cyelewssl5ou6ut36wty4ge215nle0wduxk4ap60nu0c6690hjnar7kjfzdmzzkp7v5cpqaddzh1u0n2fgcef0k0xn2lorgsvj28yw3nqkplla1op6tpggbwu6zlhwoa6ywfqbw8xb8t42yjver0zu493b4ymf0dln2b32kuq0xdkce2cvb51dpdk8g00u43i3no0rtaxo0y4xtb2smlox9zl6cy7zdqh8jtsqphmf1bdt5qr9e3zaaqz9378q5ga7aqdfdotw71xuu96l6pnpyaigvvkgdpmdmnz0he67zuj8a10t1dbl6s8wzallnignbalk0x9tteaxlt3mj8mmf1fudofol5a5wou1ph0whvso7r3mh897lpfnvz92r02mch15nzsl7lx1wljssv2ne8zpk6fkhqjuzxydxtm6yfxhs5hutccwgn0uh9m6g9wj672em014j5o7wkl6yk9lle15vqddbo03qlgxf48hqp3aockzveiyk
2t4a88fkv2jhj16c2oiehpd22e7w48zq85ajqtpsqhkqednyagsewve6p7rw21b0n0jsfi4oio4pjdqrguzy8rwf7kq3umo2h98svur85tdnu121hceys5v2niftg6idmr3wm7tqw20lqfhnctxbuk8j84n4t3poqm4hw2xe3rdytf2urx37s8jx2sww82f51iwnhi2qgz63kqpakysx12lyeqvw8apbtnow9dke5pq5j19g2f0mlobr1aru3m8sxo9q36u9uw6phonaotyypu88zcb0em0nu23zdg62s3uhslx67hz68jgbv1jdq19egji3g9axun0outxl50y40b02zmzljdxkfm7s7uj4eprvaqeqs5uxqkmt4gcexk9si4dbi3g09uqgq9sjz3fll3xb3zamw8vqgdxmqa5h04kkqd0zflmfsy3hqyqou8p61iuty2gadxtlz7pny5a7dte4gkrwjr8wv5lw8bqnnfsfzrztzhqc4t29o5qletd2qkmfshcskudwysox8lbxmut8eptgwi0d91 00:24:56.011 06:17:26 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:24:56.011 06:17:26 -- dd/basic_rw.sh@59 -- # gen_conf 00:24:56.011 06:17:26 -- dd/common.sh@31 -- # xtrace_disable 00:24:56.011 06:17:26 -- common/autotest_common.sh@10 -- # set +x 00:24:56.011 { 00:24:56.011 "subsystems": [ 00:24:56.011 { 00:24:56.011 "subsystem": "bdev", 00:24:56.011 "config": [ 00:24:56.011 { 00:24:56.011 "params": { 00:24:56.011 "trtype": "pcie", 00:24:56.011 "traddr": "0000:00:06.0", 00:24:56.011 "name": "Nvme0" 00:24:56.011 }, 00:24:56.011 "method": "bdev_nvme_attach_controller" 00:24:56.011 }, 00:24:56.011 { 00:24:56.011 "method": "bdev_wait_for_examine" 00:24:56.011 } 00:24:56.011 ] 00:24:56.011 } 00:24:56.011 ] 00:24:56.011 } 00:24:56.011 [2024-06-11 06:17:26.546651] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:24:56.011 [2024-06-11 06:17:26.547035] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129366 ] 00:24:56.270 [2024-06-11 06:17:26.729228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.531 [2024-06-11 06:17:26.968652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.201  Copying: 4096/4096 [B] (average 4000 kBps) 00:24:58.201 00:24:58.201 06:17:28 -- dd/basic_rw.sh@65 -- # gen_conf 00:24:58.201 06:17:28 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:24:58.201 06:17:28 -- dd/common.sh@31 -- # xtrace_disable 00:24:58.201 06:17:28 -- common/autotest_common.sh@10 -- # set +x 00:24:58.201 { 00:24:58.201 "subsystems": [ 00:24:58.201 { 00:24:58.201 "subsystem": "bdev", 00:24:58.201 "config": [ 00:24:58.201 { 00:24:58.201 "params": { 00:24:58.201 "trtype": "pcie", 00:24:58.201 "traddr": "0000:00:06.0", 00:24:58.201 "name": "Nvme0" 00:24:58.201 }, 00:24:58.201 "method": "bdev_nvme_attach_controller" 00:24:58.201 }, 00:24:58.201 { 00:24:58.201 "method": "bdev_wait_for_examine" 00:24:58.201 } 00:24:58.201 ] 00:24:58.201 } 00:24:58.201 ] 00:24:58.201 } 00:24:58.461 [2024-06-11 06:17:28.858855] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
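[Annotation] The JSON blocks interleaved with the timestamps above are what gen_conf emits: a bdev subsystem config that spdk_dd reads via --json /dev/fd/62 so it can attach the NVMe controller at PCI 0000:00:06.0 before each copy; one block appears per spdk_dd invocation. A minimal sketch of that plumbing, under assumptions: gen_conf is simplified to a literal printf (the real helper builds the JSON from the test environment), paths are shortened, and process substitution stands in for the harness's /dev/fd/62 redirection:

gen_conf() {   # sketch: emits the same bdev config shown (timestamped) in the log
  printf '%s\n' \
    '{"subsystems": [{"subsystem": "bdev", "config": [' \
    '  {"params": {"trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0"},' \
    '   "method": "bdev_nvme_attach_controller"},' \
    '  {"method": "bdev_wait_for_examine"}]}]}'
}
spdk_dd --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json <(gen_conf)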
00:24:58.461 [2024-06-11 06:17:28.859520] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129403 ] 00:24:58.461 [2024-06-11 06:17:29.041242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.720 [2024-06-11 06:17:29.288937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.669  Copying: 4096/4096 [B] (average 4000 kBps) 00:25:00.669 00:25:00.669 06:17:31 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:25:00.670 06:17:31 -- dd/basic_rw.sh@72 -- # [[ bsmf7v12ykvmxfz5oll59wpplirkhvzdkjupj3lfrd46c0r6gqi5rwp7l4h63yg55i1lsxm1yujg0n3g29g4ck1cdxfdpt2w8k6mjjyf6x6j7e0x59k4b975g4kjndb83jxz7kev2bubfqa7iky70m0uy0u7i0ou7n0g9x76ne46f6mfgprhxlfw0o9ruvr5jm73c36seaixvqnftj064mma6spg5ji8wsah9ojyv30uao4fcbudbjo9bf8dorhgh7qu73fkry8offi6ydmgfck83snom5fxd9jr7f54d6y7pcnorhx7tqctjm9qe8h6z2flkr3jao4cpe54iaa6lkl2obcanas17dnusiggmwzjr7lsdxjjyhqy5kujm61t225ng59d0q1j42djsllfkwqj4xo4ndoc0nla0i2h0u6efq4ddwaz0ia22qoludw0mtp5cq7veyybe27uatkgzc6wl4oo7ix4s9xgesdhc5mwpkwa19s7i6efrx29cacvei822xlhpscamh89zlt6rx6y05el4cas4kswjnfj0i5qorq1eftoay2zmgwjizbed6enzgl9rj1n0hoovygynxoc61vx9c5gr1wkqjccuhnheewu8qoy0wdzzexhpjjofz6sunenrmh7nu092m2frl4gyo83rkca8t21b1v86a1fabj1xq5efvaqeabbirtb2qr5qsxspky83b5qm6pn0blbf9hdfpj16irmn9lujab9soh6lxcby5v1sjsl37ml0d124yk9t2j5dnrtv96j577jcm78msy0p864v6himmhcne49euw27m8kwhw4xd1r1oy6w20ybdc96enxc4neg5yb0sc1mza2punsgs1mu1ss31m1kf81zdwspiaooan0tsetpcqm8kl9vzen7ti1p26abww4d5rwfd148rcxy8g07vr1lv03hn5jk9rwowhgzmab6b7xozm9bljgl6rv3qprtqsvyfv3bo74ryu4izio26qzqkuednctp3y1heakfit082h09vhhbe48u2dzy5qneand91rrva7qsnc3hhpxhqw4h9wgdwzagujah3cjx6fghypunpyy130ysixjajakpb2zle8jqhcd8h0gy1hunp5dc8aiin1kv1m53421d05nv2um4i9o9ucqyh2msaxuoeevoe2m1cvpx21m6gzmoofxdge9iit9cb3vahv65p13mcgmisyaci7yd3iuca252yweornnr94g1xmqyrhclua11mtvffda9a2gocjj1o7eteca7wofiwgl518myz44ne7h8s6sqii27x7njfb1iuytyetsrro3x4o6hwnm3xohoav7ix7ydffcugn7m8uhs0005wqq409y2lm3gsy00xv7ihz81w59lwjd2dkxsvpz2lejhoxfr0825h9y895rx7ntoqktpu24ckd2s8uikaepkrdwi7adm5765p8by4x10mfvz8unw8iagzen6hep890iwi590peri8auuaf87lwa3vxrsr4pwdjbgjuu8o9oo1qibt7gedvd20ee0sr785spt65gc1nrm1z0re1kyavp344mr6fy6fcf17o4pmsxnjjza4op6ay89lng7vdi5gwgtfxh1kc9p4yqvxft15rbj8a27rivhj3nng8huxnaqirrm0ooaryjqe31stdk3nimkqps9zvunfctfdj3zvgvauw29enf6itsh6bvftanjdj06fct4sxs739t4vr5e4gk8e1re1f09oj53a9njb37541drppksq072vyyfpeuuws0u3hq8dukian8d39voejawf0xt3eu8a7trw2qis8tig81156pqw7uom68v844gzpfsqu2ailnseintb5au9zsqsdxvbd6by3ob8pjsltdhmsmqe4keqhblfrjulqeyo18joe9fnpr0yfxih4v6ahx9af3oj5idp5tg4wj8quhlggkgstszduq5sead46kr4wf7r38hcpnk7d71gnjnbu3rsxrvwbqgkn5rl72ifohxhqodcmsad3k0fcmd73bpfaqybnwqql5klnirzb97gh4ok7b5p1j8dio71knu287coic117rizmogdj8nzz2sdpuegn5vo5e63jdgwme9p1wlu2wur4k2805ghjc3boguyzf72o12767wcmoaklqg0zjwtypr00wvao8b2z9gcc1bmrfjdrv3u5cicskb35tltnil222ipb2mqbeeme4y9df80jblkz149pgquywiwl87bk77rtyt5v3pgtkl7xfgz6tunof8lm0iezwbwc3ydapk9318hxngps9brtb4191dfdrn5vn2y7uyloyoyxrj2yyjp3bz8e6ucd70x3t3b3nh0uvio2ob4hstdmw317dxp0gmfvznwo1l0ranlt9cdz9md2fvn8pp05y3de680s0zhjkcwh4fl51bqohf5jz7ezw7zq7u5lm3ctqwo6nhlru74dptu7o0colvbeyy8ywow1ga1btc5ovq0vygip24r5ve2b27c2i2ehx22ce6kkrsum47kfjd8dqzrky9aq6x1me618qqji4gvkloxesmbkyn9o10xuzs1jii0cbkayso9nm86vsa21und7pu9oprp0bp7zncrsqw8o98t3moz592e5mt9o0abnpkq7c0lgprvnsgt8p0f98qt6hagjd8zblycop5kvxoqd1bqt9hj0p667uw6hxqs0irvbaosllpfqhms7d6vcikdbu8tgceb6vw89i5bgri8331a1n465zd1t2skn3jljj3w33b6dih4pfp5uqqt8ieeo4ztzuaawn8yeq1w90xejlb4te4vdnv15mgf8zd03l9029
3c85j2524lkcgcj8nd54701w65riiyexnyyrbymkfp52kg03po1fcvzbm11ehs72vqut1ptp6phvpa82wcl9hzxox2pgzz7mfq5ax99twltgbx909m3j3uutdkmd619vt89zxv2vnr353y4j04yjdtnz2bss11fddbj8msn3ix22739wkyvc922cyelewssl5ou6ut36wty4ge215nle0wduxk4ap60nu0c6690hjnar7kjfzdmzzkp7v5cpqaddzh1u0n2fgcef0k0xn2lorgsvj28yw3nqkplla1op6tpggbwu6zlhwoa6ywfqbw8xb8t42yjver0zu493b4ymf0dln2b32kuq0xdkce2cvb51dpdk8g00u43i3no0rtaxo0y4xtb2smlox9zl6cy7zdqh8jtsqphmf1bdt5qr9e3zaaqz9378q5ga7aqdfdotw71xuu96l6pnpyaigvvkgdpmdmnz0he67zuj8a10t1dbl6s8wzallnignbalk0x9tteaxlt3mj8mmf1fudofol5a5wou1ph0whvso7r3mh897lpfnvz92r02mch15nzsl7lx1wljssv2ne8zpk6fkhqjuzxydxtm6yfxhs5hutccwgn0uh9m6g9wj672em014j5o7wkl6yk9lle15vqddbo03qlgxf48hqp3aockzveiyk2t4a88fkv2jhj16c2oiehpd22e7w48zq85ajqtpsqhkqednyagsewve6p7rw21b0n0jsfi4oio4pjdqrguzy8rwf7kq3umo2h98svur85tdnu121hceys5v2niftg6idmr3wm7tqw20lqfhnctxbuk8j84n4t3poqm4hw2xe3rdytf2urx37s8jx2sww82f51iwnhi2qgz63kqpakysx12lyeqvw8apbtnow9dke5pq5j19g2f0mlobr1aru3m8sxo9q36u9uw6phonaotyypu88zcb0em0nu23zdg62s3uhslx67hz68jgbv1jdq19egji3g9axun0outxl50y40b02zmzljdxkfm7s7uj4eprvaqeqs5uxqkmt4gcexk9si4dbi3g09uqgq9sjz3fll3xb3zamw8vqgdxmqa5h04kkqd0zflmfsy3hqyqou8p61iuty2gadxtlz7pny5a7dte4gkrwjr8wv5lw8bqnnfsfzrztzhqc4t29o5qletd2qkmfshcskudwysox8lbxmut8eptgwi0d91 == \b\s\m\f\7\v\1\2\y\k\v\m\x\f\z\5\o\l\l\5\9\w\p\p\l\i\r\k\h\v\z\d\k\j\u\p\j\3\l\f\r\d\4\6\c\0\r\6\g\q\i\5\r\w\p\7\l\4\h\6\3\y\g\5\5\i\1\l\s\x\m\1\y\u\j\g\0\n\3\g\2\9\g\4\c\k\1\c\d\x\f\d\p\t\2\w\8\k\6\m\j\j\y\f\6\x\6\j\7\e\0\x\5\9\k\4\b\9\7\5\g\4\k\j\n\d\b\8\3\j\x\z\7\k\e\v\2\b\u\b\f\q\a\7\i\k\y\7\0\m\0\u\y\0\u\7\i\0\o\u\7\n\0\g\9\x\7\6\n\e\4\6\f\6\m\f\g\p\r\h\x\l\f\w\0\o\9\r\u\v\r\5\j\m\7\3\c\3\6\s\e\a\i\x\v\q\n\f\t\j\0\6\4\m\m\a\6\s\p\g\5\j\i\8\w\s\a\h\9\o\j\y\v\3\0\u\a\o\4\f\c\b\u\d\b\j\o\9\b\f\8\d\o\r\h\g\h\7\q\u\7\3\f\k\r\y\8\o\f\f\i\6\y\d\m\g\f\c\k\8\3\s\n\o\m\5\f\x\d\9\j\r\7\f\5\4\d\6\y\7\p\c\n\o\r\h\x\7\t\q\c\t\j\m\9\q\e\8\h\6\z\2\f\l\k\r\3\j\a\o\4\c\p\e\5\4\i\a\a\6\l\k\l\2\o\b\c\a\n\a\s\1\7\d\n\u\s\i\g\g\m\w\z\j\r\7\l\s\d\x\j\j\y\h\q\y\5\k\u\j\m\6\1\t\2\2\5\n\g\5\9\d\0\q\1\j\4\2\d\j\s\l\l\f\k\w\q\j\4\x\o\4\n\d\o\c\0\n\l\a\0\i\2\h\0\u\6\e\f\q\4\d\d\w\a\z\0\i\a\2\2\q\o\l\u\d\w\0\m\t\p\5\c\q\7\v\e\y\y\b\e\2\7\u\a\t\k\g\z\c\6\w\l\4\o\o\7\i\x\4\s\9\x\g\e\s\d\h\c\5\m\w\p\k\w\a\1\9\s\7\i\6\e\f\r\x\2\9\c\a\c\v\e\i\8\2\2\x\l\h\p\s\c\a\m\h\8\9\z\l\t\6\r\x\6\y\0\5\e\l\4\c\a\s\4\k\s\w\j\n\f\j\0\i\5\q\o\r\q\1\e\f\t\o\a\y\2\z\m\g\w\j\i\z\b\e\d\6\e\n\z\g\l\9\r\j\1\n\0\h\o\o\v\y\g\y\n\x\o\c\6\1\v\x\9\c\5\g\r\1\w\k\q\j\c\c\u\h\n\h\e\e\w\u\8\q\o\y\0\w\d\z\z\e\x\h\p\j\j\o\f\z\6\s\u\n\e\n\r\m\h\7\n\u\0\9\2\m\2\f\r\l\4\g\y\o\8\3\r\k\c\a\8\t\2\1\b\1\v\8\6\a\1\f\a\b\j\1\x\q\5\e\f\v\a\q\e\a\b\b\i\r\t\b\2\q\r\5\q\s\x\s\p\k\y\8\3\b\5\q\m\6\p\n\0\b\l\b\f\9\h\d\f\p\j\1\6\i\r\m\n\9\l\u\j\a\b\9\s\o\h\6\l\x\c\b\y\5\v\1\s\j\s\l\3\7\m\l\0\d\1\2\4\y\k\9\t\2\j\5\d\n\r\t\v\9\6\j\5\7\7\j\c\m\7\8\m\s\y\0\p\8\6\4\v\6\h\i\m\m\h\c\n\e\4\9\e\u\w\2\7\m\8\k\w\h\w\4\x\d\1\r\1\o\y\6\w\2\0\y\b\d\c\9\6\e\n\x\c\4\n\e\g\5\y\b\0\s\c\1\m\z\a\2\p\u\n\s\g\s\1\m\u\1\s\s\3\1\m\1\k\f\8\1\z\d\w\s\p\i\a\o\o\a\n\0\t\s\e\t\p\c\q\m\8\k\l\9\v\z\e\n\7\t\i\1\p\2\6\a\b\w\w\4\d\5\r\w\f\d\1\4\8\r\c\x\y\8\g\0\7\v\r\1\l\v\0\3\h\n\5\j\k\9\r\w\o\w\h\g\z\m\a\b\6\b\7\x\o\z\m\9\b\l\j\g\l\6\r\v\3\q\p\r\t\q\s\v\y\f\v\3\b\o\7\4\r\y\u\4\i\z\i\o\2\6\q\z\q\k\u\e\d\n\c\t\p\3\y\1\h\e\a\k\f\i\t\0\8\2\h\0\9\v\h\h\b\e\4\8\u\2\d\z\y\5\q\n\e\a\n\d\9\1\r\r\v\a\7\q\s\n\c\3\h\h\p\x\h\q\w\4\h\9\w\g\d\w\z\a\g\u\j\a\h\3\c\j\x\6\f\g\h\y\p\u\n\p\y\y\1\3\0\y\s\i\x\j\a\j\a\k\p\b\2\z\l\e\8\j\q\h\c\d\8\h\0\g\y\1\h\u\n\p\5\d\c\8\a\i\i\n\1\k\v\1\m\5\3\4\2\1\d\0\5\n\v\2\u\
m\4\i\9\o\9\u\c\q\y\h\2\m\s\a\x\u\o\e\e\v\o\e\2\m\1\c\v\p\x\2\1\m\6\g\z\m\o\o\f\x\d\g\e\9\i\i\t\9\c\b\3\v\a\h\v\6\5\p\1\3\m\c\g\m\i\s\y\a\c\i\7\y\d\3\i\u\c\a\2\5\2\y\w\e\o\r\n\n\r\9\4\g\1\x\m\q\y\r\h\c\l\u\a\1\1\m\t\v\f\f\d\a\9\a\2\g\o\c\j\j\1\o\7\e\t\e\c\a\7\w\o\f\i\w\g\l\5\1\8\m\y\z\4\4\n\e\7\h\8\s\6\s\q\i\i\2\7\x\7\n\j\f\b\1\i\u\y\t\y\e\t\s\r\r\o\3\x\4\o\6\h\w\n\m\3\x\o\h\o\a\v\7\i\x\7\y\d\f\f\c\u\g\n\7\m\8\u\h\s\0\0\0\5\w\q\q\4\0\9\y\2\l\m\3\g\s\y\0\0\x\v\7\i\h\z\8\1\w\5\9\l\w\j\d\2\d\k\x\s\v\p\z\2\l\e\j\h\o\x\f\r\0\8\2\5\h\9\y\8\9\5\r\x\7\n\t\o\q\k\t\p\u\2\4\c\k\d\2\s\8\u\i\k\a\e\p\k\r\d\w\i\7\a\d\m\5\7\6\5\p\8\b\y\4\x\1\0\m\f\v\z\8\u\n\w\8\i\a\g\z\e\n\6\h\e\p\8\9\0\i\w\i\5\9\0\p\e\r\i\8\a\u\u\a\f\8\7\l\w\a\3\v\x\r\s\r\4\p\w\d\j\b\g\j\u\u\8\o\9\o\o\1\q\i\b\t\7\g\e\d\v\d\2\0\e\e\0\s\r\7\8\5\s\p\t\6\5\g\c\1\n\r\m\1\z\0\r\e\1\k\y\a\v\p\3\4\4\m\r\6\f\y\6\f\c\f\1\7\o\4\p\m\s\x\n\j\j\z\a\4\o\p\6\a\y\8\9\l\n\g\7\v\d\i\5\g\w\g\t\f\x\h\1\k\c\9\p\4\y\q\v\x\f\t\1\5\r\b\j\8\a\2\7\r\i\v\h\j\3\n\n\g\8\h\u\x\n\a\q\i\r\r\m\0\o\o\a\r\y\j\q\e\3\1\s\t\d\k\3\n\i\m\k\q\p\s\9\z\v\u\n\f\c\t\f\d\j\3\z\v\g\v\a\u\w\2\9\e\n\f\6\i\t\s\h\6\b\v\f\t\a\n\j\d\j\0\6\f\c\t\4\s\x\s\7\3\9\t\4\v\r\5\e\4\g\k\8\e\1\r\e\1\f\0\9\o\j\5\3\a\9\n\j\b\3\7\5\4\1\d\r\p\p\k\s\q\0\7\2\v\y\y\f\p\e\u\u\w\s\0\u\3\h\q\8\d\u\k\i\a\n\8\d\3\9\v\o\e\j\a\w\f\0\x\t\3\e\u\8\a\7\t\r\w\2\q\i\s\8\t\i\g\8\1\1\5\6\p\q\w\7\u\o\m\6\8\v\8\4\4\g\z\p\f\s\q\u\2\a\i\l\n\s\e\i\n\t\b\5\a\u\9\z\s\q\s\d\x\v\b\d\6\b\y\3\o\b\8\p\j\s\l\t\d\h\m\s\m\q\e\4\k\e\q\h\b\l\f\r\j\u\l\q\e\y\o\1\8\j\o\e\9\f\n\p\r\0\y\f\x\i\h\4\v\6\a\h\x\9\a\f\3\o\j\5\i\d\p\5\t\g\4\w\j\8\q\u\h\l\g\g\k\g\s\t\s\z\d\u\q\5\s\e\a\d\4\6\k\r\4\w\f\7\r\3\8\h\c\p\n\k\7\d\7\1\g\n\j\n\b\u\3\r\s\x\r\v\w\b\q\g\k\n\5\r\l\7\2\i\f\o\h\x\h\q\o\d\c\m\s\a\d\3\k\0\f\c\m\d\7\3\b\p\f\a\q\y\b\n\w\q\q\l\5\k\l\n\i\r\z\b\9\7\g\h\4\o\k\7\b\5\p\1\j\8\d\i\o\7\1\k\n\u\2\8\7\c\o\i\c\1\1\7\r\i\z\m\o\g\d\j\8\n\z\z\2\s\d\p\u\e\g\n\5\v\o\5\e\6\3\j\d\g\w\m\e\9\p\1\w\l\u\2\w\u\r\4\k\2\8\0\5\g\h\j\c\3\b\o\g\u\y\z\f\7\2\o\1\2\7\6\7\w\c\m\o\a\k\l\q\g\0\z\j\w\t\y\p\r\0\0\w\v\a\o\8\b\2\z\9\g\c\c\1\b\m\r\f\j\d\r\v\3\u\5\c\i\c\s\k\b\3\5\t\l\t\n\i\l\2\2\2\i\p\b\2\m\q\b\e\e\m\e\4\y\9\d\f\8\0\j\b\l\k\z\1\4\9\p\g\q\u\y\w\i\w\l\8\7\b\k\7\7\r\t\y\t\5\v\3\p\g\t\k\l\7\x\f\g\z\6\t\u\n\o\f\8\l\m\0\i\e\z\w\b\w\c\3\y\d\a\p\k\9\3\1\8\h\x\n\g\p\s\9\b\r\t\b\4\1\9\1\d\f\d\r\n\5\v\n\2\y\7\u\y\l\o\y\o\y\x\r\j\2\y\y\j\p\3\b\z\8\e\6\u\c\d\7\0\x\3\t\3\b\3\n\h\0\u\v\i\o\2\o\b\4\h\s\t\d\m\w\3\1\7\d\x\p\0\g\m\f\v\z\n\w\o\1\l\0\r\a\n\l\t\9\c\d\z\9\m\d\2\f\v\n\8\p\p\0\5\y\3\d\e\6\8\0\s\0\z\h\j\k\c\w\h\4\f\l\5\1\b\q\o\h\f\5\j\z\7\e\z\w\7\z\q\7\u\5\l\m\3\c\t\q\w\o\6\n\h\l\r\u\7\4\d\p\t\u\7\o\0\c\o\l\v\b\e\y\y\8\y\w\o\w\1\g\a\1\b\t\c\5\o\v\q\0\v\y\g\i\p\2\4\r\5\v\e\2\b\2\7\c\2\i\2\e\h\x\2\2\c\e\6\k\k\r\s\u\m\4\7\k\f\j\d\8\d\q\z\r\k\y\9\a\q\6\x\1\m\e\6\1\8\q\q\j\i\4\g\v\k\l\o\x\e\s\m\b\k\y\n\9\o\1\0\x\u\z\s\1\j\i\i\0\c\b\k\a\y\s\o\9\n\m\8\6\v\s\a\2\1\u\n\d\7\p\u\9\o\p\r\p\0\b\p\7\z\n\c\r\s\q\w\8\o\9\8\t\3\m\o\z\5\9\2\e\5\m\t\9\o\0\a\b\n\p\k\q\7\c\0\l\g\p\r\v\n\s\g\t\8\p\0\f\9\8\q\t\6\h\a\g\j\d\8\z\b\l\y\c\o\p\5\k\v\x\o\q\d\1\b\q\t\9\h\j\0\p\6\6\7\u\w\6\h\x\q\s\0\i\r\v\b\a\o\s\l\l\p\f\q\h\m\s\7\d\6\v\c\i\k\d\b\u\8\t\g\c\e\b\6\v\w\8\9\i\5\b\g\r\i\8\3\3\1\a\1\n\4\6\5\z\d\1\t\2\s\k\n\3\j\l\j\j\3\w\3\3\b\6\d\i\h\4\p\f\p\5\u\q\q\t\8\i\e\e\o\4\z\t\z\u\a\a\w\n\8\y\e\q\1\w\9\0\x\e\j\l\b\4\t\e\4\v\d\n\v\1\5\m\g\f\8\z\d\0\3\l\9\0\2\9\3\c\8\5\j\2\5\2\4\l\k\c\g\c\j\8\n\d\5\4\7\0\1\w\6\5\r\i\i\y\e\x\n\y\y\r\b\y\m\k\f\p\5\2\k\g\0\3\p\o\1\f\c\v\z\b\m\1\1\e\h\s\7\2\v\q\u\t\1\p\t\p\6
\p\h\v\p\a\8\2\w\c\l\9\h\z\x\o\x\2\p\g\z\z\7\m\f\q\5\a\x\9\9\t\w\l\t\g\b\x\9\0\9\m\3\j\3\u\u\t\d\k\m\d\6\1\9\v\t\8\9\z\x\v\2\v\n\r\3\5\3\y\4\j\0\4\y\j\d\t\n\z\2\b\s\s\1\1\f\d\d\b\j\8\m\s\n\3\i\x\2\2\7\3\9\w\k\y\v\c\9\2\2\c\y\e\l\e\w\s\s\l\5\o\u\6\u\t\3\6\w\t\y\4\g\e\2\1\5\n\l\e\0\w\d\u\x\k\4\a\p\6\0\n\u\0\c\6\6\9\0\h\j\n\a\r\7\k\j\f\z\d\m\z\z\k\p\7\v\5\c\p\q\a\d\d\z\h\1\u\0\n\2\f\g\c\e\f\0\k\0\x\n\2\l\o\r\g\s\v\j\2\8\y\w\3\n\q\k\p\l\l\a\1\o\p\6\t\p\g\g\b\w\u\6\z\l\h\w\o\a\6\y\w\f\q\b\w\8\x\b\8\t\4\2\y\j\v\e\r\0\z\u\4\9\3\b\4\y\m\f\0\d\l\n\2\b\3\2\k\u\q\0\x\d\k\c\e\2\c\v\b\5\1\d\p\d\k\8\g\0\0\u\4\3\i\3\n\o\0\r\t\a\x\o\0\y\4\x\t\b\2\s\m\l\o\x\9\z\l\6\c\y\7\z\d\q\h\8\j\t\s\q\p\h\m\f\1\b\d\t\5\q\r\9\e\3\z\a\a\q\z\9\3\7\8\q\5\g\a\7\a\q\d\f\d\o\t\w\7\1\x\u\u\9\6\l\6\p\n\p\y\a\i\g\v\v\k\g\d\p\m\d\m\n\z\0\h\e\6\7\z\u\j\8\a\1\0\t\1\d\b\l\6\s\8\w\z\a\l\l\n\i\g\n\b\a\l\k\0\x\9\t\t\e\a\x\l\t\3\m\j\8\m\m\f\1\f\u\d\o\f\o\l\5\a\5\w\o\u\1\p\h\0\w\h\v\s\o\7\r\3\m\h\8\9\7\l\p\f\n\v\z\9\2\r\0\2\m\c\h\1\5\n\z\s\l\7\l\x\1\w\l\j\s\s\v\2\n\e\8\z\p\k\6\f\k\h\q\j\u\z\x\y\d\x\t\m\6\y\f\x\h\s\5\h\u\t\c\c\w\g\n\0\u\h\9\m\6\g\9\w\j\6\7\2\e\m\0\1\4\j\5\o\7\w\k\l\6\y\k\9\l\l\e\1\5\v\q\d\d\b\o\0\3\q\l\g\x\f\4\8\h\q\p\3\a\o\c\k\z\v\e\i\y\k\2\t\4\a\8\8\f\k\v\2\j\h\j\1\6\c\2\o\i\e\h\p\d\2\2\e\7\w\4\8\z\q\8\5\a\j\q\t\p\s\q\h\k\q\e\d\n\y\a\g\s\e\w\v\e\6\p\7\r\w\2\1\b\0\n\0\j\s\f\i\4\o\i\o\4\p\j\d\q\r\g\u\z\y\8\r\w\f\7\k\q\3\u\m\o\2\h\9\8\s\v\u\r\8\5\t\d\n\u\1\2\1\h\c\e\y\s\5\v\2\n\i\f\t\g\6\i\d\m\r\3\w\m\7\t\q\w\2\0\l\q\f\h\n\c\t\x\b\u\k\8\j\8\4\n\4\t\3\p\o\q\m\4\h\w\2\x\e\3\r\d\y\t\f\2\u\r\x\3\7\s\8\j\x\2\s\w\w\8\2\f\5\1\i\w\n\h\i\2\q\g\z\6\3\k\q\p\a\k\y\s\x\1\2\l\y\e\q\v\w\8\a\p\b\t\n\o\w\9\d\k\e\5\p\q\5\j\1\9\g\2\f\0\m\l\o\b\r\1\a\r\u\3\m\8\s\x\o\9\q\3\6\u\9\u\w\6\p\h\o\n\a\o\t\y\y\p\u\8\8\z\c\b\0\e\m\0\n\u\2\3\z\d\g\6\2\s\3\u\h\s\l\x\6\7\h\z\6\8\j\g\b\v\1\j\d\q\1\9\e\g\j\i\3\g\9\a\x\u\n\0\o\u\t\x\l\5\0\y\4\0\b\0\2\z\m\z\l\j\d\x\k\f\m\7\s\7\u\j\4\e\p\r\v\a\q\e\q\s\5\u\x\q\k\m\t\4\g\c\e\x\k\9\s\i\4\d\b\i\3\g\0\9\u\q\g\q\9\s\j\z\3\f\l\l\3\x\b\3\z\a\m\w\8\v\q\g\d\x\m\q\a\5\h\0\4\k\k\q\d\0\z\f\l\m\f\s\y\3\h\q\y\q\o\u\8\p\6\1\i\u\t\y\2\g\a\d\x\t\l\z\7\p\n\y\5\a\7\d\t\e\4\g\k\r\w\j\r\8\w\v\5\l\w\8\b\q\n\n\f\s\f\z\r\z\t\z\h\q\c\4\t\2\9\o\5\q\l\e\t\d\2\q\k\m\f\s\h\c\s\k\u\d\w\y\s\o\x\8\l\b\x\m\u\t\8\e\p\t\g\w\i\0\d\9\1 ]] 00:25:00.670 00:25:00.670 real 0m4.830s 00:25:00.670 user 0m3.973s 00:25:00.670 sys 0m0.696s 00:25:00.670 06:17:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:00.670 ************************************ 00:25:00.670 06:17:31 -- common/autotest_common.sh@10 -- # set +x 00:25:00.670 END TEST dd_rw_offset 00:25:00.670 ************************************ 00:25:00.670 06:17:31 -- dd/basic_rw.sh@1 -- # cleanup 00:25:00.670 06:17:31 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:25:00.670 06:17:31 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:00.670 06:17:31 -- dd/common.sh@11 -- # local nvme_ref= 00:25:00.670 06:17:31 -- dd/common.sh@12 -- # local size=0xffff 00:25:00.670 06:17:31 -- dd/common.sh@14 -- # local bs=1048576 00:25:00.670 06:17:31 -- dd/common.sh@15 -- # local count=1 00:25:00.670 06:17:31 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:00.670 06:17:31 -- dd/common.sh@18 -- # gen_conf 00:25:00.670 06:17:31 -- dd/common.sh@31 -- # xtrace_disable 00:25:00.670 06:17:31 -- common/autotest_common.sh@10 -- # set +x 00:25:00.929 { 00:25:00.929 "subsystems": [ 00:25:00.929 { 00:25:00.929 
"subsystem": "bdev", 00:25:00.929 "config": [ 00:25:00.929 { 00:25:00.929 "params": { 00:25:00.929 "trtype": "pcie", 00:25:00.929 "traddr": "0000:00:06.0", 00:25:00.929 "name": "Nvme0" 00:25:00.929 }, 00:25:00.929 "method": "bdev_nvme_attach_controller" 00:25:00.929 }, 00:25:00.929 { 00:25:00.929 "method": "bdev_wait_for_examine" 00:25:00.929 } 00:25:00.929 ] 00:25:00.929 } 00:25:00.929 ] 00:25:00.929 } 00:25:00.929 [2024-06-11 06:17:31.361239] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:00.929 [2024-06-11 06:17:31.361533] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129452 ] 00:25:00.929 [2024-06-11 06:17:31.525740] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.188 [2024-06-11 06:17:31.773925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.138  Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:03.138 00:25:03.138 06:17:33 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:03.138 ************************************ 00:25:03.138 END TEST spdk_dd_basic_rw 00:25:03.138 ************************************ 00:25:03.138 00:25:03.138 real 0m55.336s 00:25:03.138 user 0m45.438s 00:25:03.138 sys 0m8.299s 00:25:03.138 06:17:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:03.138 06:17:33 -- common/autotest_common.sh@10 -- # set +x 00:25:03.138 06:17:33 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:25:03.138 06:17:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:03.138 06:17:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:03.138 06:17:33 -- common/autotest_common.sh@10 -- # set +x 00:25:03.397 ************************************ 00:25:03.397 START TEST spdk_dd_posix 00:25:03.397 ************************************ 00:25:03.397 06:17:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:25:03.397 * Looking for test storage... 
00:25:03.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:25:03.397 06:17:33 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:03.397 06:17:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.397 06:17:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.397 06:17:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.397 06:17:33 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:03.397 06:17:33 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:03.397 06:17:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:03.397 06:17:33 -- paths/export.sh@5 -- # export PATH 00:25:03.397 06:17:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:03.397 06:17:33 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:25:03.397 06:17:33 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:25:03.397 06:17:33 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:25:03.397 06:17:33 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:25:03.397 06:17:33 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:03.397 06:17:33 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:03.397 06:17:33 -- dd/posix.sh@130 -- # tests 00:25:03.397 06:17:33 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:25:03.397 * First test run, using AIO 00:25:03.397 06:17:33 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:25:03.397 06:17:33 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:25:03.397 06:17:33 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:25:03.397 06:17:33 -- common/autotest_common.sh@10 -- # set +x
00:25:03.397 ************************************
00:25:03.397 START TEST dd_flag_append
00:25:03.397 ************************************
00:25:03.397 06:17:33 -- common/autotest_common.sh@1104 -- # append
00:25:03.397 06:17:33 -- dd/posix.sh@16 -- # local dump0
00:25:03.397 06:17:33 -- dd/posix.sh@17 -- # local dump1
00:25:03.397 06:17:33 -- dd/posix.sh@19 -- # gen_bytes 32
00:25:03.397 06:17:33 -- dd/common.sh@98 -- # xtrace_disable
00:25:03.397 06:17:33 -- common/autotest_common.sh@10 -- # set +x
00:25:03.397 06:17:33 -- dd/posix.sh@19 -- # dump0=6td51wku4yf5mxj1sb3h71bp2gk50y32
00:25:03.397 06:17:33 -- dd/posix.sh@20 -- # gen_bytes 32
00:25:03.397 06:17:33 -- dd/common.sh@98 -- # xtrace_disable
00:25:03.397 06:17:33 -- common/autotest_common.sh@10 -- # set +x
00:25:03.397 06:17:33 -- dd/posix.sh@20 -- # dump1=1k88l2658ixkakwclt0lk0t1a0ks06es
00:25:03.397 06:17:33 -- dd/posix.sh@22 -- # printf %s 6td51wku4yf5mxj1sb3h71bp2gk50y32
00:25:03.397 06:17:33 -- dd/posix.sh@23 -- # printf %s 1k88l2658ixkakwclt0lk0t1a0ks06es
00:25:03.397 06:17:33 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append
00:25:03.398 [2024-06-11 06:17:34.011197] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:25:03.398 [2024-06-11 06:17:34.011787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129550 ]
00:25:03.657 [2024-06-11 06:17:34.190409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:03.916 [2024-06-11 06:17:34.465969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:25:05.862  Copying: 32/32 [B] (average 31 kBps)
00:25:05.862
00:25:05.862 ************************************
00:25:05.862 END TEST dd_flag_append
00:25:05.862 ************************************
00:25:05.862 06:17:36 -- dd/posix.sh@27 -- # [[ 1k88l2658ixkakwclt0lk0t1a0ks06es6td51wku4yf5mxj1sb3h71bp2gk50y32 == \1\k\8\8\l\2\6\5\8\i\x\k\a\k\w\c\l\t\0\l\k\0\t\1\a\0\k\s\0\6\e\s\6\t\d\5\1\w\k\u\4\y\f\5\m\x\j\1\s\b\3\h\7\1\b\p\2\g\k\5\0\y\3\2 ]]
00:25:05.862
00:25:05.862 real 0m2.469s
00:25:05.862 user 0m2.004s
00:25:05.862 sys 0m0.330s
00:25:05.862 06:17:36 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:25:05.862 06:17:36 -- common/autotest_common.sh@10 -- # set +x
00:25:05.862 06:17:36 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory
00:25:05.862 06:17:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:25:05.862 06:17:36 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:25:05.862 06:17:36 -- common/autotest_common.sh@10 -- # set +x
00:25:05.862 ************************************
00:25:05.862 START TEST dd_flag_directory
00:25:05.862 ************************************
00:25:05.862 06:17:36 -- common/autotest_common.sh@1104 -- # directory
00:25:05.862 06:17:36 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:25:05.862 06:17:36 -- common/autotest_common.sh@640 -- # local es=0
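[Annotation] Looking back at dd_flag_append above: the check reduces to seeding dd.dump1 with one 32-byte random string, copying a second one onto it with --oflag=append, and requiring the result to be the concatenation (dump1 then dump0, exactly the [[ ... ]] match shown). A minimal sketch of that flow, using this run's literal values and shortened paths (spdk_dd stands for the full build/bin binary):

dump0=6td51wku4yf5mxj1sb3h71bp2gk50y32
dump1=1k88l2658ixkakwclt0lk0t1a0ks06es
printf %s "$dump0" > dd.dump0
printf %s "$dump1" > dd.dump1
spdk_dd --if=dd.dump0 --of=dd.dump1 --oflag=append   # O_APPEND: no truncate
[[ "$(< dd.dump1)" == "$dump1$dump0" ]]              # concatenation must survive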
00:25:05.862 06:17:36 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:05.862 06:17:36 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:05.862 06:17:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:05.862 06:17:36 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:05.862 06:17:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:05.862 06:17:36 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:05.862 06:17:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:05.862 06:17:36 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:05.862 06:17:36 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:05.862 06:17:36 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:06.121 [2024-06-11 06:17:36.514999] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:06.121 [2024-06-11 06:17:36.515393] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129603 ] 00:25:06.121 [2024-06-11 06:17:36.697834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.380 [2024-06-11 06:17:36.942013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.948 [2024-06-11 06:17:37.341123] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:06.948 [2024-06-11 06:17:37.341458] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:06.948 [2024-06-11 06:17:37.341519] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:07.884 [2024-06-11 06:17:38.284540] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:08.452 06:17:38 -- common/autotest_common.sh@643 -- # es=236 00:25:08.452 06:17:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:08.452 06:17:38 -- common/autotest_common.sh@652 -- # es=108 00:25:08.452 06:17:38 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:08.452 06:17:38 -- common/autotest_common.sh@660 -- # es=1 00:25:08.452 06:17:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:08.452 06:17:38 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:08.452 06:17:38 -- common/autotest_common.sh@640 -- # local es=0 00:25:08.452 06:17:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:08.452 06:17:38 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:08.452 06:17:38 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:08.452 06:17:38 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:08.452 06:17:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:08.452 06:17:38 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:08.452 06:17:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:08.452 06:17:38 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:08.452 06:17:38 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:08.452 06:17:38 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:08.452 [2024-06-11 06:17:38.886880] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:08.452 [2024-06-11 06:17:38.887284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129630 ] 00:25:08.452 [2024-06-11 06:17:39.069287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.711 [2024-06-11 06:17:39.309502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.279 [2024-06-11 06:17:39.708471] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:09.279 [2024-06-11 06:17:39.708805] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:09.279 [2024-06-11 06:17:39.708869] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:10.214 [2024-06-11 06:17:40.653121] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:10.782 ************************************ 00:25:10.782 END TEST dd_flag_directory 00:25:10.782 ************************************ 00:25:10.782 06:17:41 -- common/autotest_common.sh@643 -- # es=236 00:25:10.782 06:17:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:10.782 06:17:41 -- common/autotest_common.sh@652 -- # es=108 00:25:10.782 06:17:41 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:10.782 06:17:41 -- common/autotest_common.sh@660 -- # es=1 00:25:10.782 06:17:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:10.782 00:25:10.782 real 0m4.732s 00:25:10.782 user 0m3.848s 00:25:10.782 sys 0m0.681s 00:25:10.782 06:17:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:10.782 06:17:41 -- common/autotest_common.sh@10 -- # set +x 00:25:10.782 06:17:41 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:25:10.782 06:17:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:10.782 06:17:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:10.782 06:17:41 -- common/autotest_common.sh@10 -- # set +x 00:25:10.782 ************************************ 00:25:10.782 START TEST dd_flag_nofollow 00:25:10.782 ************************************ 00:25:10.782 06:17:41 -- common/autotest_common.sh@1104 -- # nofollow 00:25:10.782 06:17:41 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:25:10.782 06:17:41 -- dd/posix.sh@37 -- # local 
test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:25:10.782 06:17:41 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:25:10.782 06:17:41 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:25:10.782 06:17:41 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:10.782 06:17:41 -- common/autotest_common.sh@640 -- # local es=0 00:25:10.782 06:17:41 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:10.782 06:17:41 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:10.782 06:17:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:10.782 06:17:41 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:10.782 06:17:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:10.782 06:17:41 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:10.782 06:17:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:10.783 06:17:41 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:10.783 06:17:41 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:10.783 06:17:41 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:10.783 [2024-06-11 06:17:41.320091] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
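[Annotation] NOT is the harness's expected-failure wrapper: the dd_flag_directory test that just ended, and the nofollow checks starting here, pass only if spdk_dd exits non-zero ("Not a directory" there, "Too many levels of symbolic links" here). The es= bookkeeping above (236 -> 108 -> 1 in the directory case) maps the raw exit status into that assertion. A simplified sketch of the idea, not the exact autotest_common.sh helper:

NOT() {   # succeed only if the wrapped command fails
  if "$@"; then return 1; else return 0; fi
}
NOT spdk_dd --if=dd.dump0 --iflag=directory --of=dd.dump0   # must fail to pass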
00:25:10.783 [2024-06-11 06:17:41.320906] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129689 ] 00:25:11.041 [2024-06-11 06:17:41.502044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.300 [2024-06-11 06:17:41.744367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.558 [2024-06-11 06:17:42.148804] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:25:11.558 [2024-06-11 06:17:42.149065] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:25:11.558 [2024-06-11 06:17:42.149126] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:12.495 [2024-06-11 06:17:43.085647] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:13.062 06:17:43 -- common/autotest_common.sh@643 -- # es=216 00:25:13.062 06:17:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:13.063 06:17:43 -- common/autotest_common.sh@652 -- # es=88 00:25:13.063 06:17:43 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:13.063 06:17:43 -- common/autotest_common.sh@660 -- # es=1 00:25:13.063 06:17:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:13.063 06:17:43 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:13.063 06:17:43 -- common/autotest_common.sh@640 -- # local es=0 00:25:13.063 06:17:43 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:13.063 06:17:43 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:13.063 06:17:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:13.063 06:17:43 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:13.063 06:17:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:13.063 06:17:43 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:13.063 06:17:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:13.063 06:17:43 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:13.063 06:17:43 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:13.063 06:17:43 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:13.063 [2024-06-11 06:17:43.694837] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:13.063 [2024-06-11 06:17:43.695061] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129722 ] 00:25:13.322 [2024-06-11 06:17:43.877117] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.580 [2024-06-11 06:17:44.133948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.147 [2024-06-11 06:17:44.528626] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:25:14.147 [2024-06-11 06:17:44.528732] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:25:14.147 [2024-06-11 06:17:44.528759] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:15.083 [2024-06-11 06:17:45.451111] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:15.342 06:17:45 -- common/autotest_common.sh@643 -- # es=216 00:25:15.342 06:17:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:15.342 06:17:45 -- common/autotest_common.sh@652 -- # es=88 00:25:15.342 06:17:45 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:15.342 06:17:45 -- common/autotest_common.sh@660 -- # es=1 00:25:15.342 06:17:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:15.342 06:17:45 -- dd/posix.sh@46 -- # gen_bytes 512 00:25:15.342 06:17:45 -- dd/common.sh@98 -- # xtrace_disable 00:25:15.342 06:17:45 -- common/autotest_common.sh@10 -- # set +x 00:25:15.342 06:17:45 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:15.601 [2024-06-11 06:17:46.042772] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
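[Annotation] The two NOT-wrapped copies above were supposed to fail: with --iflag=nofollow or --oflag=nofollow, spdk_dd opens the path O_NOFOLLOW, so the dd.dump0.link / dd.dump1.link symlinks created with ln -fs die with ELOOP ("Too many levels of symbolic links"). The copy now starting reads through dd.dump0.link without the flag and is expected to succeed. A condensed sketch of the three steps (shortened paths; NOT as sketched earlier):

ln -fs dd.dump0 dd.dump0.link
NOT spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1   # ELOOP expected
NOT spdk_dd --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow   # ELOOP expected
spdk_dd --if=dd.dump0.link --of=dd.dump1                        # plain copy: OK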
00:25:15.601 [2024-06-11 06:17:46.043014] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129749 ] 00:25:15.601 [2024-06-11 06:17:46.225645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.859 [2024-06-11 06:17:46.481816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.825  Copying: 512/512 [B] (average 500 kBps) 00:25:17.825 00:25:17.826 06:17:48 -- dd/posix.sh@49 -- # [[ h91lzimbzmntmql5vfhgh724kk36soiuj8y3s0gobbwq23gjoiwlkmzpavmkdzn3d6f7i4j3qib9ac74h116cnflfackrqyhc2yyssd8fuqo37j78jhfs2ftguiokvug9yu5195w8j8dpgvp5qqq9tot5m57fxccep40tjfosc1j6fsk39yleqh7jcg6xkqcc5o53bss50eb5gwlqewutz8gbtn2dh2xh6qgha05gcfynihi4zouw25rqxyf66leti20l48et2oyrhh2ue5bk69mtgjwh55wbt40ql1h500o3rn4jwskebg2mawt3tq9ub92d34w75lsobof6fhf0thew5nx9hdkus7970gzpl7cll83ydsgh1y3h5xy4e1xj9dyfjoa19ageo7frjbgpcpkjns6k6g846uarwmrdpp2w3mjtfo2x4pkhq0200g1vnigcososg7tnlz0bay2laolov0ecl3leoli5tgdkdxeo8bpe2nyefyj2pp4no11 == \h\9\1\l\z\i\m\b\z\m\n\t\m\q\l\5\v\f\h\g\h\7\2\4\k\k\3\6\s\o\i\u\j\8\y\3\s\0\g\o\b\b\w\q\2\3\g\j\o\i\w\l\k\m\z\p\a\v\m\k\d\z\n\3\d\6\f\7\i\4\j\3\q\i\b\9\a\c\7\4\h\1\1\6\c\n\f\l\f\a\c\k\r\q\y\h\c\2\y\y\s\s\d\8\f\u\q\o\3\7\j\7\8\j\h\f\s\2\f\t\g\u\i\o\k\v\u\g\9\y\u\5\1\9\5\w\8\j\8\d\p\g\v\p\5\q\q\q\9\t\o\t\5\m\5\7\f\x\c\c\e\p\4\0\t\j\f\o\s\c\1\j\6\f\s\k\3\9\y\l\e\q\h\7\j\c\g\6\x\k\q\c\c\5\o\5\3\b\s\s\5\0\e\b\5\g\w\l\q\e\w\u\t\z\8\g\b\t\n\2\d\h\2\x\h\6\q\g\h\a\0\5\g\c\f\y\n\i\h\i\4\z\o\u\w\2\5\r\q\x\y\f\6\6\l\e\t\i\2\0\l\4\8\e\t\2\o\y\r\h\h\2\u\e\5\b\k\6\9\m\t\g\j\w\h\5\5\w\b\t\4\0\q\l\1\h\5\0\0\o\3\r\n\4\j\w\s\k\e\b\g\2\m\a\w\t\3\t\q\9\u\b\9\2\d\3\4\w\7\5\l\s\o\b\o\f\6\f\h\f\0\t\h\e\w\5\n\x\9\h\d\k\u\s\7\9\7\0\g\z\p\l\7\c\l\l\8\3\y\d\s\g\h\1\y\3\h\5\x\y\4\e\1\x\j\9\d\y\f\j\o\a\1\9\a\g\e\o\7\f\r\j\b\g\p\c\p\k\j\n\s\6\k\6\g\8\4\6\u\a\r\w\m\r\d\p\p\2\w\3\m\j\t\f\o\2\x\4\p\k\h\q\0\2\0\0\g\1\v\n\i\g\c\o\s\o\s\g\7\t\n\l\z\0\b\a\y\2\l\a\o\l\o\v\0\e\c\l\3\l\e\o\l\i\5\t\g\d\k\d\x\e\o\8\b\p\e\2\n\y\e\f\y\j\2\p\p\4\n\o\1\1 ]] 00:25:17.826 00:25:17.826 real 0m7.100s 00:25:17.826 user 0m5.780s 00:25:17.826 sys 0m0.989s 00:25:17.826 ************************************ 00:25:17.826 END TEST dd_flag_nofollow 00:25:17.826 ************************************ 00:25:17.826 06:17:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:17.826 06:17:48 -- common/autotest_common.sh@10 -- # set +x 00:25:17.826 06:17:48 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:25:17.826 06:17:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:17.826 06:17:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:17.826 06:17:48 -- common/autotest_common.sh@10 -- # set +x 00:25:17.826 ************************************ 00:25:17.826 START TEST dd_flag_noatime 00:25:17.826 ************************************ 00:25:17.826 06:17:48 -- common/autotest_common.sh@1104 -- # noatime 00:25:17.826 06:17:48 -- dd/posix.sh@53 -- # local atime_if 00:25:17.826 06:17:48 -- dd/posix.sh@54 -- # local atime_of 00:25:17.826 06:17:48 -- dd/posix.sh@58 -- # gen_bytes 512 00:25:17.826 06:17:48 -- dd/common.sh@98 -- # xtrace_disable 00:25:17.826 06:17:48 -- common/autotest_common.sh@10 -- # set +x 00:25:17.826 06:17:48 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:17.826 06:17:48 -- dd/posix.sh@60 -- # atime_if=1718086666 00:25:17.826 06:17:48 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:17.826 06:17:48 -- dd/posix.sh@61 -- # atime_of=1718086668 00:25:17.826 06:17:48 -- dd/posix.sh@66 -- # sleep 1 00:25:18.799 06:17:49 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:19.058 [2024-06-11 06:17:49.499308] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:19.058 [2024-06-11 06:17:49.499543] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129825 ] 00:25:19.058 [2024-06-11 06:17:49.686712] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.624 [2024-06-11 06:17:49.967497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.257  Copying: 512/512 [B] (average 500 kBps) 00:25:21.257 00:25:21.257 06:17:51 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:21.257 06:17:51 -- dd/posix.sh@69 -- # (( atime_if == 1718086666 )) 00:25:21.257 06:17:51 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:21.257 06:17:51 -- dd/posix.sh@70 -- # (( atime_of == 1718086668 )) 00:25:21.257 06:17:51 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:21.515 [2024-06-11 06:17:51.929951] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:21.515 [2024-06-11 06:17:51.930187] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129859 ] 00:25:21.515 [2024-06-11 06:17:52.112644] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.773 [2024-06-11 06:17:52.344153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.716  Copying: 512/512 [B] (average 500 kBps) 00:25:23.716 00:25:23.716 06:17:54 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:23.716 06:17:54 -- dd/posix.sh@73 -- # (( atime_if < 1718086672 )) 00:25:23.716 00:25:23.716 real 0m5.801s 00:25:23.716 user 0m3.883s 00:25:23.716 sys 0m0.638s 00:25:23.716 06:17:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:23.716 06:17:54 -- common/autotest_common.sh@10 -- # set +x 00:25:23.716 ************************************ 00:25:23.716 END TEST dd_flag_noatime 00:25:23.716 ************************************ 00:25:23.716 06:17:54 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:25:23.716 06:17:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:23.716 06:17:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:23.716 06:17:54 -- common/autotest_common.sh@10 -- # set +x 00:25:23.716 ************************************ 00:25:23.716 START TEST dd_flags_misc 00:25:23.716 ************************************ 00:25:23.716 06:17:54 -- common/autotest_common.sh@1104 -- # io 00:25:23.716 06:17:54 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:25:23.716 06:17:54 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 
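[Annotation] Before the flag matrix below gets underway, a note on the dd_flag_noatime test that just finished: it captured each file's access time with stat --printf=%X, copied with --iflag=noatime and asserted both atimes were untouched ((( atime_if == 1718086666 )), (( atime_of == 1718086668 ))), then repeated the copy without the flag and required the atime to advance ((( atime_if < ... ))). A minimal sketch of the pattern (epoch values are this run's; shortened paths):

atime_if=$(stat --printf=%X dd.dump0)   # access times captured up front
atime_of=$(stat --printf=%X dd.dump1)
sleep 1                                 # let the clock move past them
spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
(( $(stat --printf=%X dd.dump0) == atime_if ))   # O_NOATIME: source atime frozen
spdk_dd --if=dd.dump0 --of=dd.dump1              # same copy without the flag
(( atime_if < $(stat --printf=%X dd.dump0) ))    # now the atime must advance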
00:25:23.716 06:17:54 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:25:23.716 06:17:54 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:25:23.716 06:17:54 -- dd/posix.sh@86 -- # gen_bytes 512 00:25:23.716 06:17:54 -- dd/common.sh@98 -- # xtrace_disable 00:25:23.716 06:17:54 -- common/autotest_common.sh@10 -- # set +x 00:25:23.716 06:17:54 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:23.716 06:17:54 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:25:23.716 [2024-06-11 06:17:54.344480] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:23.716 [2024-06-11 06:17:54.344685] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129907 ] 00:25:23.975 [2024-06-11 06:17:54.529415] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.233 [2024-06-11 06:17:54.766286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.177  Copying: 512/512 [B] (average 500 kBps) 00:25:26.177 00:25:26.177 06:17:56 -- dd/posix.sh@93 -- # [[ thq2t73oahs8bxsetmy65sriqpa0vbra5h18ugkl2gj4bkm7wwxvuifwxomry7yf0eyumhxjihfrfvy9ckpyjzkslbcfb7ace7tblpa8ks341sn0ksldnt7fo9hemyldtmuulnz1d8ilv2qhawt82ph272ho5rl66j4dctsaqeb9k0lo9phy08skplv99oqnq9sl1h2wsm1n5lk86x2hm1afvcu6jgwbff5109vhnfg2cyr59pt5m6tvsnu0jdzavhhc4jis5vw1rrjg3o52x36flvwwcv4g9winjlmmzf79lpjtugj2lob2sskmrlarjx4ufdmhls558ad1xh25ufa5k4jciu7bdr92pial55e8y6ti5n78lftnru727kfukmkxv7ahod0r8kgpd1ctke4r2sq4lffrrd06rullqtom71zf69rohjyqospn5qzkw9d5ly5chu73yw8i2me0m1dafmzidvp6nljn8659zpmo9qs189iu3miq7a8ckk7i == \t\h\q\2\t\7\3\o\a\h\s\8\b\x\s\e\t\m\y\6\5\s\r\i\q\p\a\0\v\b\r\a\5\h\1\8\u\g\k\l\2\g\j\4\b\k\m\7\w\w\x\v\u\i\f\w\x\o\m\r\y\7\y\f\0\e\y\u\m\h\x\j\i\h\f\r\f\v\y\9\c\k\p\y\j\z\k\s\l\b\c\f\b\7\a\c\e\7\t\b\l\p\a\8\k\s\3\4\1\s\n\0\k\s\l\d\n\t\7\f\o\9\h\e\m\y\l\d\t\m\u\u\l\n\z\1\d\8\i\l\v\2\q\h\a\w\t\8\2\p\h\2\7\2\h\o\5\r\l\6\6\j\4\d\c\t\s\a\q\e\b\9\k\0\l\o\9\p\h\y\0\8\s\k\p\l\v\9\9\o\q\n\q\9\s\l\1\h\2\w\s\m\1\n\5\l\k\8\6\x\2\h\m\1\a\f\v\c\u\6\j\g\w\b\f\f\5\1\0\9\v\h\n\f\g\2\c\y\r\5\9\p\t\5\m\6\t\v\s\n\u\0\j\d\z\a\v\h\h\c\4\j\i\s\5\v\w\1\r\r\j\g\3\o\5\2\x\3\6\f\l\v\w\w\c\v\4\g\9\w\i\n\j\l\m\m\z\f\7\9\l\p\j\t\u\g\j\2\l\o\b\2\s\s\k\m\r\l\a\r\j\x\4\u\f\d\m\h\l\s\5\5\8\a\d\1\x\h\2\5\u\f\a\5\k\4\j\c\i\u\7\b\d\r\9\2\p\i\a\l\5\5\e\8\y\6\t\i\5\n\7\8\l\f\t\n\r\u\7\2\7\k\f\u\k\m\k\x\v\7\a\h\o\d\0\r\8\k\g\p\d\1\c\t\k\e\4\r\2\s\q\4\l\f\f\r\r\d\0\6\r\u\l\l\q\t\o\m\7\1\z\f\6\9\r\o\h\j\y\q\o\s\p\n\5\q\z\k\w\9\d\5\l\y\5\c\h\u\7\3\y\w\8\i\2\m\e\0\m\1\d\a\f\m\z\i\d\v\p\6\n\l\j\n\8\6\5\9\z\p\m\o\9\q\s\1\8\9\i\u\3\m\i\q\7\a\8\c\k\k\7\i ]] 00:25:26.177 06:17:56 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:26.177 06:17:56 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:25:26.177 [2024-06-11 06:17:56.686473] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:26.177 [2024-06-11 06:17:56.687254] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129947 ] 00:25:26.435 [2024-06-11 06:17:56.869739] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.694 [2024-06-11 06:17:57.103217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.329  Copying: 512/512 [B] (average 500 kBps) 00:25:28.329 00:25:28.329 06:17:58 -- dd/posix.sh@93 -- # [[ thq2t73oahs8bxsetmy65sriqpa0vbra5h18ugkl2gj4bkm7wwxvuifwxomry7yf0eyumhxjihfrfvy9ckpyjzkslbcfb7ace7tblpa8ks341sn0ksldnt7fo9hemyldtmuulnz1d8ilv2qhawt82ph272ho5rl66j4dctsaqeb9k0lo9phy08skplv99oqnq9sl1h2wsm1n5lk86x2hm1afvcu6jgwbff5109vhnfg2cyr59pt5m6tvsnu0jdzavhhc4jis5vw1rrjg3o52x36flvwwcv4g9winjlmmzf79lpjtugj2lob2sskmrlarjx4ufdmhls558ad1xh25ufa5k4jciu7bdr92pial55e8y6ti5n78lftnru727kfukmkxv7ahod0r8kgpd1ctke4r2sq4lffrrd06rullqtom71zf69rohjyqospn5qzkw9d5ly5chu73yw8i2me0m1dafmzidvp6nljn8659zpmo9qs189iu3miq7a8ckk7i == \t\h\q\2\t\7\3\o\a\h\s\8\b\x\s\e\t\m\y\6\5\s\r\i\q\p\a\0\v\b\r\a\5\h\1\8\u\g\k\l\2\g\j\4\b\k\m\7\w\w\x\v\u\i\f\w\x\o\m\r\y\7\y\f\0\e\y\u\m\h\x\j\i\h\f\r\f\v\y\9\c\k\p\y\j\z\k\s\l\b\c\f\b\7\a\c\e\7\t\b\l\p\a\8\k\s\3\4\1\s\n\0\k\s\l\d\n\t\7\f\o\9\h\e\m\y\l\d\t\m\u\u\l\n\z\1\d\8\i\l\v\2\q\h\a\w\t\8\2\p\h\2\7\2\h\o\5\r\l\6\6\j\4\d\c\t\s\a\q\e\b\9\k\0\l\o\9\p\h\y\0\8\s\k\p\l\v\9\9\o\q\n\q\9\s\l\1\h\2\w\s\m\1\n\5\l\k\8\6\x\2\h\m\1\a\f\v\c\u\6\j\g\w\b\f\f\5\1\0\9\v\h\n\f\g\2\c\y\r\5\9\p\t\5\m\6\t\v\s\n\u\0\j\d\z\a\v\h\h\c\4\j\i\s\5\v\w\1\r\r\j\g\3\o\5\2\x\3\6\f\l\v\w\w\c\v\4\g\9\w\i\n\j\l\m\m\z\f\7\9\l\p\j\t\u\g\j\2\l\o\b\2\s\s\k\m\r\l\a\r\j\x\4\u\f\d\m\h\l\s\5\5\8\a\d\1\x\h\2\5\u\f\a\5\k\4\j\c\i\u\7\b\d\r\9\2\p\i\a\l\5\5\e\8\y\6\t\i\5\n\7\8\l\f\t\n\r\u\7\2\7\k\f\u\k\m\k\x\v\7\a\h\o\d\0\r\8\k\g\p\d\1\c\t\k\e\4\r\2\s\q\4\l\f\f\r\r\d\0\6\r\u\l\l\q\t\o\m\7\1\z\f\6\9\r\o\h\j\y\q\o\s\p\n\5\q\z\k\w\9\d\5\l\y\5\c\h\u\7\3\y\w\8\i\2\m\e\0\m\1\d\a\f\m\z\i\d\v\p\6\n\l\j\n\8\6\5\9\z\p\m\o\9\q\s\1\8\9\i\u\3\m\i\q\7\a\8\c\k\k\7\i ]] 00:25:28.329 06:17:58 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:28.329 06:17:58 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:25:28.588 [2024-06-11 06:17:59.035068] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:28.588 [2024-06-11 06:17:59.035317] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129976 ] 00:25:28.588 [2024-06-11 06:17:59.217520] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.846 [2024-06-11 06:17:59.470489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.829  Copying: 512/512 [B] (average 250 kBps) 00:25:30.829 00:25:30.829 06:18:01 -- dd/posix.sh@93 -- # [[ thq2t73oahs8bxsetmy65sriqpa0vbra5h18ugkl2gj4bkm7wwxvuifwxomry7yf0eyumhxjihfrfvy9ckpyjzkslbcfb7ace7tblpa8ks341sn0ksldnt7fo9hemyldtmuulnz1d8ilv2qhawt82ph272ho5rl66j4dctsaqeb9k0lo9phy08skplv99oqnq9sl1h2wsm1n5lk86x2hm1afvcu6jgwbff5109vhnfg2cyr59pt5m6tvsnu0jdzavhhc4jis5vw1rrjg3o52x36flvwwcv4g9winjlmmzf79lpjtugj2lob2sskmrlarjx4ufdmhls558ad1xh25ufa5k4jciu7bdr92pial55e8y6ti5n78lftnru727kfukmkxv7ahod0r8kgpd1ctke4r2sq4lffrrd06rullqtom71zf69rohjyqospn5qzkw9d5ly5chu73yw8i2me0m1dafmzidvp6nljn8659zpmo9qs189iu3miq7a8ckk7i == \t\h\q\2\t\7\3\o\a\h\s\8\b\x\s\e\t\m\y\6\5\s\r\i\q\p\a\0\v\b\r\a\5\h\1\8\u\g\k\l\2\g\j\4\b\k\m\7\w\w\x\v\u\i\f\w\x\o\m\r\y\7\y\f\0\e\y\u\m\h\x\j\i\h\f\r\f\v\y\9\c\k\p\y\j\z\k\s\l\b\c\f\b\7\a\c\e\7\t\b\l\p\a\8\k\s\3\4\1\s\n\0\k\s\l\d\n\t\7\f\o\9\h\e\m\y\l\d\t\m\u\u\l\n\z\1\d\8\i\l\v\2\q\h\a\w\t\8\2\p\h\2\7\2\h\o\5\r\l\6\6\j\4\d\c\t\s\a\q\e\b\9\k\0\l\o\9\p\h\y\0\8\s\k\p\l\v\9\9\o\q\n\q\9\s\l\1\h\2\w\s\m\1\n\5\l\k\8\6\x\2\h\m\1\a\f\v\c\u\6\j\g\w\b\f\f\5\1\0\9\v\h\n\f\g\2\c\y\r\5\9\p\t\5\m\6\t\v\s\n\u\0\j\d\z\a\v\h\h\c\4\j\i\s\5\v\w\1\r\r\j\g\3\o\5\2\x\3\6\f\l\v\w\w\c\v\4\g\9\w\i\n\j\l\m\m\z\f\7\9\l\p\j\t\u\g\j\2\l\o\b\2\s\s\k\m\r\l\a\r\j\x\4\u\f\d\m\h\l\s\5\5\8\a\d\1\x\h\2\5\u\f\a\5\k\4\j\c\i\u\7\b\d\r\9\2\p\i\a\l\5\5\e\8\y\6\t\i\5\n\7\8\l\f\t\n\r\u\7\2\7\k\f\u\k\m\k\x\v\7\a\h\o\d\0\r\8\k\g\p\d\1\c\t\k\e\4\r\2\s\q\4\l\f\f\r\r\d\0\6\r\u\l\l\q\t\o\m\7\1\z\f\6\9\r\o\h\j\y\q\o\s\p\n\5\q\z\k\w\9\d\5\l\y\5\c\h\u\7\3\y\w\8\i\2\m\e\0\m\1\d\a\f\m\z\i\d\v\p\6\n\l\j\n\8\6\5\9\z\p\m\o\9\q\s\1\8\9\i\u\3\m\i\q\7\a\8\c\k\k\7\i ]] 00:25:30.829 06:18:01 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:30.829 06:18:01 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:25:30.829 [2024-06-11 06:18:01.437492] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:30.829 [2024-06-11 06:18:01.437706] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130007 ] 00:25:31.087 [2024-06-11 06:18:01.615548] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.345 [2024-06-11 06:18:01.894172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.292  Copying: 512/512 [B] (average 166 kBps) 00:25:33.292 00:25:33.293 06:18:03 -- dd/posix.sh@93 -- # [[ thq2t73oahs8bxsetmy65sriqpa0vbra5h18ugkl2gj4bkm7wwxvuifwxomry7yf0eyumhxjihfrfvy9ckpyjzkslbcfb7ace7tblpa8ks341sn0ksldnt7fo9hemyldtmuulnz1d8ilv2qhawt82ph272ho5rl66j4dctsaqeb9k0lo9phy08skplv99oqnq9sl1h2wsm1n5lk86x2hm1afvcu6jgwbff5109vhnfg2cyr59pt5m6tvsnu0jdzavhhc4jis5vw1rrjg3o52x36flvwwcv4g9winjlmmzf79lpjtugj2lob2sskmrlarjx4ufdmhls558ad1xh25ufa5k4jciu7bdr92pial55e8y6ti5n78lftnru727kfukmkxv7ahod0r8kgpd1ctke4r2sq4lffrrd06rullqtom71zf69rohjyqospn5qzkw9d5ly5chu73yw8i2me0m1dafmzidvp6nljn8659zpmo9qs189iu3miq7a8ckk7i == \t\h\q\2\t\7\3\o\a\h\s\8\b\x\s\e\t\m\y\6\5\s\r\i\q\p\a\0\v\b\r\a\5\h\1\8\u\g\k\l\2\g\j\4\b\k\m\7\w\w\x\v\u\i\f\w\x\o\m\r\y\7\y\f\0\e\y\u\m\h\x\j\i\h\f\r\f\v\y\9\c\k\p\y\j\z\k\s\l\b\c\f\b\7\a\c\e\7\t\b\l\p\a\8\k\s\3\4\1\s\n\0\k\s\l\d\n\t\7\f\o\9\h\e\m\y\l\d\t\m\u\u\l\n\z\1\d\8\i\l\v\2\q\h\a\w\t\8\2\p\h\2\7\2\h\o\5\r\l\6\6\j\4\d\c\t\s\a\q\e\b\9\k\0\l\o\9\p\h\y\0\8\s\k\p\l\v\9\9\o\q\n\q\9\s\l\1\h\2\w\s\m\1\n\5\l\k\8\6\x\2\h\m\1\a\f\v\c\u\6\j\g\w\b\f\f\5\1\0\9\v\h\n\f\g\2\c\y\r\5\9\p\t\5\m\6\t\v\s\n\u\0\j\d\z\a\v\h\h\c\4\j\i\s\5\v\w\1\r\r\j\g\3\o\5\2\x\3\6\f\l\v\w\w\c\v\4\g\9\w\i\n\j\l\m\m\z\f\7\9\l\p\j\t\u\g\j\2\l\o\b\2\s\s\k\m\r\l\a\r\j\x\4\u\f\d\m\h\l\s\5\5\8\a\d\1\x\h\2\5\u\f\a\5\k\4\j\c\i\u\7\b\d\r\9\2\p\i\a\l\5\5\e\8\y\6\t\i\5\n\7\8\l\f\t\n\r\u\7\2\7\k\f\u\k\m\k\x\v\7\a\h\o\d\0\r\8\k\g\p\d\1\c\t\k\e\4\r\2\s\q\4\l\f\f\r\r\d\0\6\r\u\l\l\q\t\o\m\7\1\z\f\6\9\r\o\h\j\y\q\o\s\p\n\5\q\z\k\w\9\d\5\l\y\5\c\h\u\7\3\y\w\8\i\2\m\e\0\m\1\d\a\f\m\z\i\d\v\p\6\n\l\j\n\8\6\5\9\z\p\m\o\9\q\s\1\8\9\i\u\3\m\i\q\7\a\8\c\k\k\7\i ]] 00:25:33.293 06:18:03 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:25:33.293 06:18:03 -- dd/posix.sh@86 -- # gen_bytes 512 00:25:33.293 06:18:03 -- dd/common.sh@98 -- # xtrace_disable 00:25:33.293 06:18:03 -- common/autotest_common.sh@10 -- # set +x 00:25:33.293 06:18:03 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:33.293 06:18:03 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:25:33.552 [2024-06-11 06:18:03.982165] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:33.552 [2024-06-11 06:18:03.982389] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130043 ] 00:25:33.552 [2024-06-11 06:18:04.165969] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.813 [2024-06-11 06:18:04.426564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.761  Copying: 512/512 [B] (average 500 kBps) 00:25:35.761 00:25:35.761 06:18:06 -- dd/posix.sh@93 -- # [[ yxfxufwkb8afiz1qnru838b6j0xhmxoo8coqx08krvh7gg67w8fle65isbc7zryh83wc2kthnm0oo16078oqtq0r0038g753ummsny3vqsul59x3ys9ev1ply1dkdvvxv9vuu6yu2dld1tjifuj7fy7aqg7tz0q00wtt7tkw9na7bjfvcm96mykgyphyrpn1crriakn0zf53184re3xtuktn1zsu17c2hzgger02hkzxixqifun3xjpzm6aorx44cx3o5112trnst4i4lwvz9tb7ob2kxfikeq0vw251hci0ocvxavv3o29pcr3djy23xyqbjm3c250qnmdqowgw57v88lh808q70lmnmnu8alwqlrrncodtkkmhuvmu4cg35hy9ev021zmwv7wsmkllj2txvl0waz7mxk5um1w3l2d4y4ur5hilxtixjzc3npgsgvievoi6j0zipzz8wdfaz5tfg8flstg610s45czqgzl84wagmhp6q09ker8uakr3 == \y\x\f\x\u\f\w\k\b\8\a\f\i\z\1\q\n\r\u\8\3\8\b\6\j\0\x\h\m\x\o\o\8\c\o\q\x\0\8\k\r\v\h\7\g\g\6\7\w\8\f\l\e\6\5\i\s\b\c\7\z\r\y\h\8\3\w\c\2\k\t\h\n\m\0\o\o\1\6\0\7\8\o\q\t\q\0\r\0\0\3\8\g\7\5\3\u\m\m\s\n\y\3\v\q\s\u\l\5\9\x\3\y\s\9\e\v\1\p\l\y\1\d\k\d\v\v\x\v\9\v\u\u\6\y\u\2\d\l\d\1\t\j\i\f\u\j\7\f\y\7\a\q\g\7\t\z\0\q\0\0\w\t\t\7\t\k\w\9\n\a\7\b\j\f\v\c\m\9\6\m\y\k\g\y\p\h\y\r\p\n\1\c\r\r\i\a\k\n\0\z\f\5\3\1\8\4\r\e\3\x\t\u\k\t\n\1\z\s\u\1\7\c\2\h\z\g\g\e\r\0\2\h\k\z\x\i\x\q\i\f\u\n\3\x\j\p\z\m\6\a\o\r\x\4\4\c\x\3\o\5\1\1\2\t\r\n\s\t\4\i\4\l\w\v\z\9\t\b\7\o\b\2\k\x\f\i\k\e\q\0\v\w\2\5\1\h\c\i\0\o\c\v\x\a\v\v\3\o\2\9\p\c\r\3\d\j\y\2\3\x\y\q\b\j\m\3\c\2\5\0\q\n\m\d\q\o\w\g\w\5\7\v\8\8\l\h\8\0\8\q\7\0\l\m\n\m\n\u\8\a\l\w\q\l\r\r\n\c\o\d\t\k\k\m\h\u\v\m\u\4\c\g\3\5\h\y\9\e\v\0\2\1\z\m\w\v\7\w\s\m\k\l\l\j\2\t\x\v\l\0\w\a\z\7\m\x\k\5\u\m\1\w\3\l\2\d\4\y\4\u\r\5\h\i\l\x\t\i\x\j\z\c\3\n\p\g\s\g\v\i\e\v\o\i\6\j\0\z\i\p\z\z\8\w\d\f\a\z\5\t\f\g\8\f\l\s\t\g\6\1\0\s\4\5\c\z\q\g\z\l\8\4\w\a\g\m\h\p\6\q\0\9\k\e\r\8\u\a\k\r\3 ]] 00:25:35.761 06:18:06 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:35.761 06:18:06 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:25:36.020 [2024-06-11 06:18:06.420747] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:36.020 [2024-06-11 06:18:06.421002] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130072 ] 00:25:36.020 [2024-06-11 06:18:06.606322] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.279 [2024-06-11 06:18:06.867040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.226  Copying: 512/512 [B] (average 500 kBps) 00:25:38.226 00:25:38.227 06:18:08 -- dd/posix.sh@93 -- # [[ yxfxufwkb8afiz1qnru838b6j0xhmxoo8coqx08krvh7gg67w8fle65isbc7zryh83wc2kthnm0oo16078oqtq0r0038g753ummsny3vqsul59x3ys9ev1ply1dkdvvxv9vuu6yu2dld1tjifuj7fy7aqg7tz0q00wtt7tkw9na7bjfvcm96mykgyphyrpn1crriakn0zf53184re3xtuktn1zsu17c2hzgger02hkzxixqifun3xjpzm6aorx44cx3o5112trnst4i4lwvz9tb7ob2kxfikeq0vw251hci0ocvxavv3o29pcr3djy23xyqbjm3c250qnmdqowgw57v88lh808q70lmnmnu8alwqlrrncodtkkmhuvmu4cg35hy9ev021zmwv7wsmkllj2txvl0waz7mxk5um1w3l2d4y4ur5hilxtixjzc3npgsgvievoi6j0zipzz8wdfaz5tfg8flstg610s45czqgzl84wagmhp6q09ker8uakr3 == \y\x\f\x\u\f\w\k\b\8\a\f\i\z\1\q\n\r\u\8\3\8\b\6\j\0\x\h\m\x\o\o\8\c\o\q\x\0\8\k\r\v\h\7\g\g\6\7\w\8\f\l\e\6\5\i\s\b\c\7\z\r\y\h\8\3\w\c\2\k\t\h\n\m\0\o\o\1\6\0\7\8\o\q\t\q\0\r\0\0\3\8\g\7\5\3\u\m\m\s\n\y\3\v\q\s\u\l\5\9\x\3\y\s\9\e\v\1\p\l\y\1\d\k\d\v\v\x\v\9\v\u\u\6\y\u\2\d\l\d\1\t\j\i\f\u\j\7\f\y\7\a\q\g\7\t\z\0\q\0\0\w\t\t\7\t\k\w\9\n\a\7\b\j\f\v\c\m\9\6\m\y\k\g\y\p\h\y\r\p\n\1\c\r\r\i\a\k\n\0\z\f\5\3\1\8\4\r\e\3\x\t\u\k\t\n\1\z\s\u\1\7\c\2\h\z\g\g\e\r\0\2\h\k\z\x\i\x\q\i\f\u\n\3\x\j\p\z\m\6\a\o\r\x\4\4\c\x\3\o\5\1\1\2\t\r\n\s\t\4\i\4\l\w\v\z\9\t\b\7\o\b\2\k\x\f\i\k\e\q\0\v\w\2\5\1\h\c\i\0\o\c\v\x\a\v\v\3\o\2\9\p\c\r\3\d\j\y\2\3\x\y\q\b\j\m\3\c\2\5\0\q\n\m\d\q\o\w\g\w\5\7\v\8\8\l\h\8\0\8\q\7\0\l\m\n\m\n\u\8\a\l\w\q\l\r\r\n\c\o\d\t\k\k\m\h\u\v\m\u\4\c\g\3\5\h\y\9\e\v\0\2\1\z\m\w\v\7\w\s\m\k\l\l\j\2\t\x\v\l\0\w\a\z\7\m\x\k\5\u\m\1\w\3\l\2\d\4\y\4\u\r\5\h\i\l\x\t\i\x\j\z\c\3\n\p\g\s\g\v\i\e\v\o\i\6\j\0\z\i\p\z\z\8\w\d\f\a\z\5\t\f\g\8\f\l\s\t\g\6\1\0\s\4\5\c\z\q\g\z\l\8\4\w\a\g\m\h\p\6\q\0\9\k\e\r\8\u\a\k\r\3 ]] 00:25:38.227 06:18:08 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:38.227 06:18:08 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:25:38.227 [2024-06-11 06:18:08.820449] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:38.227 [2024-06-11 06:18:08.820654] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130100 ] 00:25:38.486 [2024-06-11 06:18:09.005735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.745 [2024-06-11 06:18:09.271391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.691  Copying: 512/512 [B] (average 250 kBps) 00:25:40.691 00:25:40.692 06:18:11 -- dd/posix.sh@93 -- # [[ yxfxufwkb8afiz1qnru838b6j0xhmxoo8coqx08krvh7gg67w8fle65isbc7zryh83wc2kthnm0oo16078oqtq0r0038g753ummsny3vqsul59x3ys9ev1ply1dkdvvxv9vuu6yu2dld1tjifuj7fy7aqg7tz0q00wtt7tkw9na7bjfvcm96mykgyphyrpn1crriakn0zf53184re3xtuktn1zsu17c2hzgger02hkzxixqifun3xjpzm6aorx44cx3o5112trnst4i4lwvz9tb7ob2kxfikeq0vw251hci0ocvxavv3o29pcr3djy23xyqbjm3c250qnmdqowgw57v88lh808q70lmnmnu8alwqlrrncodtkkmhuvmu4cg35hy9ev021zmwv7wsmkllj2txvl0waz7mxk5um1w3l2d4y4ur5hilxtixjzc3npgsgvievoi6j0zipzz8wdfaz5tfg8flstg610s45czqgzl84wagmhp6q09ker8uakr3 == \y\x\f\x\u\f\w\k\b\8\a\f\i\z\1\q\n\r\u\8\3\8\b\6\j\0\x\h\m\x\o\o\8\c\o\q\x\0\8\k\r\v\h\7\g\g\6\7\w\8\f\l\e\6\5\i\s\b\c\7\z\r\y\h\8\3\w\c\2\k\t\h\n\m\0\o\o\1\6\0\7\8\o\q\t\q\0\r\0\0\3\8\g\7\5\3\u\m\m\s\n\y\3\v\q\s\u\l\5\9\x\3\y\s\9\e\v\1\p\l\y\1\d\k\d\v\v\x\v\9\v\u\u\6\y\u\2\d\l\d\1\t\j\i\f\u\j\7\f\y\7\a\q\g\7\t\z\0\q\0\0\w\t\t\7\t\k\w\9\n\a\7\b\j\f\v\c\m\9\6\m\y\k\g\y\p\h\y\r\p\n\1\c\r\r\i\a\k\n\0\z\f\5\3\1\8\4\r\e\3\x\t\u\k\t\n\1\z\s\u\1\7\c\2\h\z\g\g\e\r\0\2\h\k\z\x\i\x\q\i\f\u\n\3\x\j\p\z\m\6\a\o\r\x\4\4\c\x\3\o\5\1\1\2\t\r\n\s\t\4\i\4\l\w\v\z\9\t\b\7\o\b\2\k\x\f\i\k\e\q\0\v\w\2\5\1\h\c\i\0\o\c\v\x\a\v\v\3\o\2\9\p\c\r\3\d\j\y\2\3\x\y\q\b\j\m\3\c\2\5\0\q\n\m\d\q\o\w\g\w\5\7\v\8\8\l\h\8\0\8\q\7\0\l\m\n\m\n\u\8\a\l\w\q\l\r\r\n\c\o\d\t\k\k\m\h\u\v\m\u\4\c\g\3\5\h\y\9\e\v\0\2\1\z\m\w\v\7\w\s\m\k\l\l\j\2\t\x\v\l\0\w\a\z\7\m\x\k\5\u\m\1\w\3\l\2\d\4\y\4\u\r\5\h\i\l\x\t\i\x\j\z\c\3\n\p\g\s\g\v\i\e\v\o\i\6\j\0\z\i\p\z\z\8\w\d\f\a\z\5\t\f\g\8\f\l\s\t\g\6\1\0\s\4\5\c\z\q\g\z\l\8\4\w\a\g\m\h\p\6\q\0\9\k\e\r\8\u\a\k\r\3 ]] 00:25:40.692 06:18:11 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:40.692 06:18:11 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:25:40.950 [2024-06-11 06:18:11.348399] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:40.950 [2024-06-11 06:18:11.348631] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130139 ] 00:25:40.950 [2024-06-11 06:18:11.533002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.209 [2024-06-11 06:18:11.798250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.152  Copying: 512/512 [B] (average 83 kBps) 00:25:43.152 00:25:43.152 ************************************ 00:25:43.152 END TEST dd_flags_misc 00:25:43.152 ************************************ 00:25:43.152 06:18:13 -- dd/posix.sh@93 -- # [[ yxfxufwkb8afiz1qnru838b6j0xhmxoo8coqx08krvh7gg67w8fle65isbc7zryh83wc2kthnm0oo16078oqtq0r0038g753ummsny3vqsul59x3ys9ev1ply1dkdvvxv9vuu6yu2dld1tjifuj7fy7aqg7tz0q00wtt7tkw9na7bjfvcm96mykgyphyrpn1crriakn0zf53184re3xtuktn1zsu17c2hzgger02hkzxixqifun3xjpzm6aorx44cx3o5112trnst4i4lwvz9tb7ob2kxfikeq0vw251hci0ocvxavv3o29pcr3djy23xyqbjm3c250qnmdqowgw57v88lh808q70lmnmnu8alwqlrrncodtkkmhuvmu4cg35hy9ev021zmwv7wsmkllj2txvl0waz7mxk5um1w3l2d4y4ur5hilxtixjzc3npgsgvievoi6j0zipzz8wdfaz5tfg8flstg610s45czqgzl84wagmhp6q09ker8uakr3 == \y\x\f\x\u\f\w\k\b\8\a\f\i\z\1\q\n\r\u\8\3\8\b\6\j\0\x\h\m\x\o\o\8\c\o\q\x\0\8\k\r\v\h\7\g\g\6\7\w\8\f\l\e\6\5\i\s\b\c\7\z\r\y\h\8\3\w\c\2\k\t\h\n\m\0\o\o\1\6\0\7\8\o\q\t\q\0\r\0\0\3\8\g\7\5\3\u\m\m\s\n\y\3\v\q\s\u\l\5\9\x\3\y\s\9\e\v\1\p\l\y\1\d\k\d\v\v\x\v\9\v\u\u\6\y\u\2\d\l\d\1\t\j\i\f\u\j\7\f\y\7\a\q\g\7\t\z\0\q\0\0\w\t\t\7\t\k\w\9\n\a\7\b\j\f\v\c\m\9\6\m\y\k\g\y\p\h\y\r\p\n\1\c\r\r\i\a\k\n\0\z\f\5\3\1\8\4\r\e\3\x\t\u\k\t\n\1\z\s\u\1\7\c\2\h\z\g\g\e\r\0\2\h\k\z\x\i\x\q\i\f\u\n\3\x\j\p\z\m\6\a\o\r\x\4\4\c\x\3\o\5\1\1\2\t\r\n\s\t\4\i\4\l\w\v\z\9\t\b\7\o\b\2\k\x\f\i\k\e\q\0\v\w\2\5\1\h\c\i\0\o\c\v\x\a\v\v\3\o\2\9\p\c\r\3\d\j\y\2\3\x\y\q\b\j\m\3\c\2\5\0\q\n\m\d\q\o\w\g\w\5\7\v\8\8\l\h\8\0\8\q\7\0\l\m\n\m\n\u\8\a\l\w\q\l\r\r\n\c\o\d\t\k\k\m\h\u\v\m\u\4\c\g\3\5\h\y\9\e\v\0\2\1\z\m\w\v\7\w\s\m\k\l\l\j\2\t\x\v\l\0\w\a\z\7\m\x\k\5\u\m\1\w\3\l\2\d\4\y\4\u\r\5\h\i\l\x\t\i\x\j\z\c\3\n\p\g\s\g\v\i\e\v\o\i\6\j\0\z\i\p\z\z\8\w\d\f\a\z\5\t\f\g\8\f\l\s\t\g\6\1\0\s\4\5\c\z\q\g\z\l\8\4\w\a\g\m\h\p\6\q\0\9\k\e\r\8\u\a\k\r\3 ]] 00:25:43.152 00:25:43.152 real 0m19.512s 00:25:43.152 user 0m15.715s 00:25:43.152 sys 0m2.716s 00:25:43.152 06:18:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:43.152 06:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:43.411 06:18:13 -- dd/posix.sh@131 -- # tests_forced_aio 00:25:43.411 06:18:13 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:25:43.411 * Second test run, using AIO 00:25:43.411 06:18:13 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:25:43.411 06:18:13 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:25:43.411 06:18:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:43.411 06:18:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:43.411 06:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:43.411 ************************************ 00:25:43.411 START TEST dd_flag_append_forced_aio 00:25:43.411 ************************************ 00:25:43.411 06:18:13 -- common/autotest_common.sh@1104 -- # append 00:25:43.411 06:18:13 -- dd/posix.sh@16 -- # local dump0 00:25:43.411 06:18:13 -- dd/posix.sh@17 -- # local dump1 00:25:43.411 06:18:13 -- dd/posix.sh@19 -- # gen_bytes 32 00:25:43.411 06:18:13 -- dd/common.sh@98 -- # xtrace_disable 
00:25:43.411 06:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:43.411 06:18:13 -- dd/posix.sh@19 -- # dump0=vlmo6y783r7fz6mgokwgcxgxlcy7y30g 00:25:43.411 06:18:13 -- dd/posix.sh@20 -- # gen_bytes 32 00:25:43.411 06:18:13 -- dd/common.sh@98 -- # xtrace_disable 00:25:43.411 06:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:43.411 06:18:13 -- dd/posix.sh@20 -- # dump1=tayvvehigvq42xw4lspfhq2eo07aw4bk 00:25:43.411 06:18:13 -- dd/posix.sh@22 -- # printf %s vlmo6y783r7fz6mgokwgcxgxlcy7y30g 00:25:43.411 06:18:13 -- dd/posix.sh@23 -- # printf %s tayvvehigvq42xw4lspfhq2eo07aw4bk 00:25:43.411 06:18:13 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:25:43.411 [2024-06-11 06:18:13.917951] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:43.411 [2024-06-11 06:18:13.918134] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130190 ] 00:25:43.670 [2024-06-11 06:18:14.086632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.929 [2024-06-11 06:18:14.371171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.091  Copying: 32/32 [B] (average 31 kBps) 00:25:46.091 00:25:46.091 ************************************ 00:25:46.091 END TEST dd_flag_append_forced_aio 00:25:46.091 ************************************ 00:25:46.091 06:18:16 -- dd/posix.sh@27 -- # [[ tayvvehigvq42xw4lspfhq2eo07aw4bkvlmo6y783r7fz6mgokwgcxgxlcy7y30g == \t\a\y\v\v\e\h\i\g\v\q\4\2\x\w\4\l\s\p\f\h\q\2\e\o\0\7\a\w\4\b\k\v\l\m\o\6\y\7\8\3\r\7\f\z\6\m\g\o\k\w\g\c\x\g\x\l\c\y\7\y\3\0\g ]] 00:25:46.091 00:25:46.091 real 0m2.522s 00:25:46.091 user 0m2.051s 00:25:46.091 sys 0m0.337s 00:25:46.091 06:18:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:46.091 06:18:16 -- common/autotest_common.sh@10 -- # set +x 00:25:46.091 06:18:16 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:25:46.091 06:18:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:46.091 06:18:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:46.091 06:18:16 -- common/autotest_common.sh@10 -- # set +x 00:25:46.091 ************************************ 00:25:46.091 START TEST dd_flag_directory_forced_aio 00:25:46.091 ************************************ 00:25:46.091 06:18:16 -- common/autotest_common.sh@1104 -- # directory 00:25:46.091 06:18:16 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:46.091 06:18:16 -- common/autotest_common.sh@640 -- # local es=0 00:25:46.091 06:18:16 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:46.091 06:18:16 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:46.091 06:18:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:46.091 06:18:16 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:46.091 06:18:16 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:46.091 06:18:16 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:46.091 06:18:16 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:46.091 06:18:16 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:46.091 06:18:16 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:46.091 06:18:16 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:46.091 [2024-06-11 06:18:16.496752] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:46.091 [2024-06-11 06:18:16.496954] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130250 ] 00:25:46.091 [2024-06-11 06:18:16.659892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.350 [2024-06-11 06:18:16.915254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.918 [2024-06-11 06:18:17.318118] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:46.918 [2024-06-11 06:18:17.318213] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:46.918 [2024-06-11 06:18:17.318255] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:47.855 [2024-06-11 06:18:18.254932] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:48.423 06:18:18 -- common/autotest_common.sh@643 -- # es=236 00:25:48.423 06:18:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:48.423 06:18:18 -- common/autotest_common.sh@652 -- # es=108 00:25:48.423 06:18:18 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:48.423 06:18:18 -- common/autotest_common.sh@660 -- # es=1 00:25:48.423 06:18:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:48.423 06:18:18 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:48.423 06:18:18 -- common/autotest_common.sh@640 -- # local es=0 00:25:48.423 06:18:18 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:48.423 06:18:18 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:48.423 06:18:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:48.423 06:18:18 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:48.423 06:18:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:48.423 06:18:18 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:48.423 06:18:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:48.424 06:18:18 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:25:48.424 06:18:18 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:48.424 06:18:18 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:48.424 [2024-06-11 06:18:18.841382] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:48.424 [2024-06-11 06:18:18.841551] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130282 ] 00:25:48.424 [2024-06-11 06:18:19.003041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.683 [2024-06-11 06:18:19.258877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.251 [2024-06-11 06:18:19.648313] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:49.251 [2024-06-11 06:18:19.648418] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:49.251 [2024-06-11 06:18:19.648446] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:50.239 [2024-06-11 06:18:20.560775] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:50.498 06:18:21 -- common/autotest_common.sh@643 -- # es=236 00:25:50.498 06:18:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:50.498 06:18:21 -- common/autotest_common.sh@652 -- # es=108 00:25:50.498 06:18:21 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:50.498 06:18:21 -- common/autotest_common.sh@660 -- # es=1 00:25:50.498 06:18:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:50.498 00:25:50.498 real 0m4.634s 00:25:50.498 user 0m3.816s 00:25:50.498 sys 0m0.619s 00:25:50.498 06:18:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:50.498 06:18:21 -- common/autotest_common.sh@10 -- # set +x 00:25:50.498 ************************************ 00:25:50.498 END TEST dd_flag_directory_forced_aio 00:25:50.498 ************************************ 00:25:50.498 06:18:21 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:25:50.498 06:18:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:50.498 06:18:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:50.498 06:18:21 -- common/autotest_common.sh@10 -- # set +x 00:25:50.498 ************************************ 00:25:50.498 START TEST dd_flag_nofollow_forced_aio 00:25:50.498 ************************************ 00:25:50.498 06:18:21 -- common/autotest_common.sh@1104 -- # nofollow 00:25:50.498 06:18:21 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:25:50.498 06:18:21 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:25:50.498 06:18:21 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:25:50.498 06:18:21 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:25:50.498 06:18:21 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:50.498 06:18:21 -- common/autotest_common.sh@640 -- # local es=0 00:25:50.498 06:18:21 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:50.498 06:18:21 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:50.498 06:18:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:50.498 06:18:21 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:50.498 06:18:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:50.498 06:18:21 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:50.498 06:18:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:50.498 06:18:21 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:50.498 06:18:21 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:50.498 06:18:21 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:50.757 [2024-06-11 06:18:21.223110] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:50.757 [2024-06-11 06:18:21.223325] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130343 ] 00:25:51.017 [2024-06-11 06:18:21.405959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.017 [2024-06-11 06:18:21.638453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.584 [2024-06-11 06:18:22.019694] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:25:51.584 [2024-06-11 06:18:22.019817] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:25:51.584 [2024-06-11 06:18:22.019844] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:52.521 [2024-06-11 06:18:22.920955] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:52.780 06:18:23 -- common/autotest_common.sh@643 -- # es=216 00:25:52.780 06:18:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:52.780 06:18:23 -- common/autotest_common.sh@652 -- # es=88 00:25:52.780 06:18:23 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:52.780 06:18:23 -- common/autotest_common.sh@660 -- # es=1 00:25:52.780 06:18:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:52.780 06:18:23 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:52.780 06:18:23 -- common/autotest_common.sh@640 -- # local es=0 00:25:52.780 06:18:23 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:52.780 06:18:23 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:53.039 06:18:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:53.039 06:18:23 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:53.039 06:18:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:53.039 06:18:23 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:53.040 06:18:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:53.040 06:18:23 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:53.040 06:18:23 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:53.040 06:18:23 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:53.040 [2024-06-11 06:18:23.511049] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:53.040 [2024-06-11 06:18:23.511274] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130370 ] 00:25:53.299 [2024-06-11 06:18:23.693876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.299 [2024-06-11 06:18:23.935294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.867 [2024-06-11 06:18:24.323175] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:25:53.867 [2024-06-11 06:18:24.323275] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:25:53.867 [2024-06-11 06:18:24.323302] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:54.804 [2024-06-11 06:18:25.231001] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:55.373 06:18:25 -- common/autotest_common.sh@643 -- # es=216 00:25:55.373 06:18:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:55.373 06:18:25 -- common/autotest_common.sh@652 -- # es=88 00:25:55.373 06:18:25 -- common/autotest_common.sh@653 -- # case "$es" in 00:25:55.373 06:18:25 -- common/autotest_common.sh@660 -- # es=1 00:25:55.373 06:18:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:55.373 06:18:25 -- dd/posix.sh@46 -- # gen_bytes 512 00:25:55.373 06:18:25 -- dd/common.sh@98 -- # xtrace_disable 00:25:55.373 06:18:25 -- common/autotest_common.sh@10 -- # set +x 00:25:55.373 06:18:25 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:55.373 [2024-06-11 06:18:25.826682] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:25:55.373 [2024-06-11 06:18:25.826893] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130397 ] 00:25:55.373 [2024-06-11 06:18:26.008731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.632 [2024-06-11 06:18:26.267848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.578  Copying: 512/512 [B] (average 500 kBps) 00:25:57.578 00:25:57.579 06:18:28 -- dd/posix.sh@49 -- # [[ mqn3do63ds86b9x03y0sfh56wga8wz88c1zofwx6aikct91emu15gu3niff7vfyq2x4arf3lvpehtjmqulwfddu1dgc0192do95lv01w6tzd1xmh1eutia9v58wf0dn7kkforjxvrdfgkcunziixu6q5spi9hu1htxogtr9011fc5lh3dr630xaw4d0q75q0gi36tzd1ie02wuujeis2muohqwxy669s1g2i2wz0p66ubw4hn7v99fqkjfkvr0b40618cjwr8kcr3u78kijipfyz5wfqtgmwl3x0wcuq47atkpelgfk5ph3yk2p5m6hgvmojr02prnqhy8w9tvyycgnzv35m6vlujhnxs8w0djce0an4gsln475eufkd4bguu1wan4jnmal77j93mdjv9ic0ap6thx31hpxj5k8osy90mxfjc5ep9eylh8sr6fbnefra4dk7vsz51sfx7q1i4sh4r2ddxsj50qzfe58vosy3ba67d2k9wq0bscyy1f3o == \m\q\n\3\d\o\6\3\d\s\8\6\b\9\x\0\3\y\0\s\f\h\5\6\w\g\a\8\w\z\8\8\c\1\z\o\f\w\x\6\a\i\k\c\t\9\1\e\m\u\1\5\g\u\3\n\i\f\f\7\v\f\y\q\2\x\4\a\r\f\3\l\v\p\e\h\t\j\m\q\u\l\w\f\d\d\u\1\d\g\c\0\1\9\2\d\o\9\5\l\v\0\1\w\6\t\z\d\1\x\m\h\1\e\u\t\i\a\9\v\5\8\w\f\0\d\n\7\k\k\f\o\r\j\x\v\r\d\f\g\k\c\u\n\z\i\i\x\u\6\q\5\s\p\i\9\h\u\1\h\t\x\o\g\t\r\9\0\1\1\f\c\5\l\h\3\d\r\6\3\0\x\a\w\4\d\0\q\7\5\q\0\g\i\3\6\t\z\d\1\i\e\0\2\w\u\u\j\e\i\s\2\m\u\o\h\q\w\x\y\6\6\9\s\1\g\2\i\2\w\z\0\p\6\6\u\b\w\4\h\n\7\v\9\9\f\q\k\j\f\k\v\r\0\b\4\0\6\1\8\c\j\w\r\8\k\c\r\3\u\7\8\k\i\j\i\p\f\y\z\5\w\f\q\t\g\m\w\l\3\x\0\w\c\u\q\4\7\a\t\k\p\e\l\g\f\k\5\p\h\3\y\k\2\p\5\m\6\h\g\v\m\o\j\r\0\2\p\r\n\q\h\y\8\w\9\t\v\y\y\c\g\n\z\v\3\5\m\6\v\l\u\j\h\n\x\s\8\w\0\d\j\c\e\0\a\n\4\g\s\l\n\4\7\5\e\u\f\k\d\4\b\g\u\u\1\w\a\n\4\j\n\m\a\l\7\7\j\9\3\m\d\j\v\9\i\c\0\a\p\6\t\h\x\3\1\h\p\x\j\5\k\8\o\s\y\9\0\m\x\f\j\c\5\e\p\9\e\y\l\h\8\s\r\6\f\b\n\e\f\r\a\4\d\k\7\v\s\z\5\1\s\f\x\7\q\1\i\4\s\h\4\r\2\d\d\x\s\j\5\0\q\z\f\e\5\8\v\o\s\y\3\b\a\6\7\d\2\k\9\w\q\0\b\s\c\y\y\1\f\3\o ]] 00:25:57.579 00:25:57.579 real 0m6.967s 00:25:57.579 user 0m5.659s 00:25:57.579 sys 0m0.977s 00:25:57.579 ************************************ 00:25:57.579 END TEST dd_flag_nofollow_forced_aio 00:25:57.579 06:18:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:57.579 06:18:28 -- common/autotest_common.sh@10 -- # set +x 00:25:57.579 ************************************ 00:25:57.579 06:18:28 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:25:57.579 06:18:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:57.579 06:18:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:57.579 06:18:28 -- common/autotest_common.sh@10 -- # set +x 00:25:57.579 ************************************ 00:25:57.579 START TEST dd_flag_noatime_forced_aio 00:25:57.579 ************************************ 00:25:57.579 06:18:28 -- common/autotest_common.sh@1104 -- # noatime 00:25:57.579 06:18:28 -- dd/posix.sh@53 -- # local atime_if 00:25:57.579 06:18:28 -- dd/posix.sh@54 -- # local atime_of 00:25:57.579 06:18:28 -- dd/posix.sh@58 -- # gen_bytes 512 00:25:57.579 06:18:28 -- dd/common.sh@98 -- # xtrace_disable 00:25:57.579 06:18:28 -- common/autotest_common.sh@10 -- # set +x 00:25:57.579 06:18:28 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:57.579 06:18:28 -- dd/posix.sh@60 -- # atime_if=1718086706 
00:25:57.579 06:18:28 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:57.579 06:18:28 -- dd/posix.sh@61 -- # atime_of=1718086708 00:25:57.579 06:18:28 -- dd/posix.sh@66 -- # sleep 1 00:25:58.957 06:18:29 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:58.957 [2024-06-11 06:18:29.272956] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:25:58.957 [2024-06-11 06:18:29.273154] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130467 ] 00:25:58.957 [2024-06-11 06:18:29.457331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.216 [2024-06-11 06:18:29.731294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:01.165  Copying: 512/512 [B] (average 500 kBps) 00:26:01.165 00:26:01.165 06:18:31 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:01.165 06:18:31 -- dd/posix.sh@69 -- # (( atime_if == 1718086706 )) 00:26:01.165 06:18:31 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:01.165 06:18:31 -- dd/posix.sh@70 -- # (( atime_of == 1718086708 )) 00:26:01.165 06:18:31 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:01.165 [2024-06-11 06:18:31.666208] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:01.165 [2024-06-11 06:18:31.666416] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130507 ] 00:26:01.425 [2024-06-11 06:18:31.849133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.684 [2024-06-11 06:18:32.079139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.321  Copying: 512/512 [B] (average 500 kBps) 00:26:03.321 00:26:03.321 06:18:33 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:03.321 06:18:33 -- dd/posix.sh@73 -- # (( atime_if < 1718086712 )) 00:26:03.321 00:26:03.321 real 0m5.760s 00:26:03.321 user 0m3.761s 00:26:03.321 sys 0m0.722s 00:26:03.321 06:18:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:03.321 06:18:33 -- common/autotest_common.sh@10 -- # set +x 00:26:03.321 ************************************ 00:26:03.321 END TEST dd_flag_noatime_forced_aio 00:26:03.321 ************************************ 00:26:03.581 06:18:33 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:26:03.581 06:18:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:03.581 06:18:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:03.581 06:18:33 -- common/autotest_common.sh@10 -- # set +x 00:26:03.581 ************************************ 00:26:03.581 START TEST dd_flags_misc_forced_aio 00:26:03.581 ************************************ 00:26:03.581 06:18:33 -- common/autotest_common.sh@1104 -- # io 00:26:03.581 06:18:33 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:26:03.581 06:18:33 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:26:03.581 06:18:33 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:26:03.581 06:18:33 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:26:03.581 06:18:33 -- dd/posix.sh@86 -- # gen_bytes 512 00:26:03.581 06:18:33 -- dd/common.sh@98 -- # xtrace_disable 00:26:03.581 06:18:33 -- common/autotest_common.sh@10 -- # set +x 00:26:03.581 06:18:33 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:03.581 06:18:33 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:26:03.581 [2024-06-11 06:18:34.060245] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:03.581 [2024-06-11 06:18:34.060399] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130555 ] 00:26:03.581 [2024-06-11 06:18:34.221468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.840 [2024-06-11 06:18:34.451617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.790  Copying: 512/512 [B] (average 500 kBps) 00:26:05.790 00:26:05.790 06:18:36 -- dd/posix.sh@93 -- # [[ pypr2oah2ig66b17laxhb2og8qhneewlel26p5d8h5uzzzb8cnavmjw8qyzpxsabo8hkrork2z1ou8xo4icgmpwdvklmaogbzw5b76roa0j8rvf5luuspvi6w86bkcudn9gu8prveh97myjrnby89z5abz1vjsqvafon5tiwsykddrtcbnp6v4q1obyya0l4nbsshgttpixdki37ywt2qci8mi8iyferbri1nbytv9iuwb8p0x7lotmtdd9wg46akpak10itcrtmhq0eaguxx31iqn04m3w0hzjr483uyzmg7tsu9tywoyb62xgf4serka6hz50v5bm0u3s5a71excv5p7jba810v2gtob2h4c9qkk0sl5vd2771n7ptcr26wleceb5vvungbyr4q4soubvt7i39m1njvjzeol2vgxhy6slwmu3n15j9h9ou7fz6li8nmosc8nyvhzfo8pu8cfx1i0ehwuzmriei7v1mgw2b1wpxfmj9psrd90uk2qoq == \p\y\p\r\2\o\a\h\2\i\g\6\6\b\1\7\l\a\x\h\b\2\o\g\8\q\h\n\e\e\w\l\e\l\2\6\p\5\d\8\h\5\u\z\z\z\b\8\c\n\a\v\m\j\w\8\q\y\z\p\x\s\a\b\o\8\h\k\r\o\r\k\2\z\1\o\u\8\x\o\4\i\c\g\m\p\w\d\v\k\l\m\a\o\g\b\z\w\5\b\7\6\r\o\a\0\j\8\r\v\f\5\l\u\u\s\p\v\i\6\w\8\6\b\k\c\u\d\n\9\g\u\8\p\r\v\e\h\9\7\m\y\j\r\n\b\y\8\9\z\5\a\b\z\1\v\j\s\q\v\a\f\o\n\5\t\i\w\s\y\k\d\d\r\t\c\b\n\p\6\v\4\q\1\o\b\y\y\a\0\l\4\n\b\s\s\h\g\t\t\p\i\x\d\k\i\3\7\y\w\t\2\q\c\i\8\m\i\8\i\y\f\e\r\b\r\i\1\n\b\y\t\v\9\i\u\w\b\8\p\0\x\7\l\o\t\m\t\d\d\9\w\g\4\6\a\k\p\a\k\1\0\i\t\c\r\t\m\h\q\0\e\a\g\u\x\x\3\1\i\q\n\0\4\m\3\w\0\h\z\j\r\4\8\3\u\y\z\m\g\7\t\s\u\9\t\y\w\o\y\b\6\2\x\g\f\4\s\e\r\k\a\6\h\z\5\0\v\5\b\m\0\u\3\s\5\a\7\1\e\x\c\v\5\p\7\j\b\a\8\1\0\v\2\g\t\o\b\2\h\4\c\9\q\k\k\0\s\l\5\v\d\2\7\7\1\n\7\p\t\c\r\2\6\w\l\e\c\e\b\5\v\v\u\n\g\b\y\r\4\q\4\s\o\u\b\v\t\7\i\3\9\m\1\n\j\v\j\z\e\o\l\2\v\g\x\h\y\6\s\l\w\m\u\3\n\1\5\j\9\h\9\o\u\7\f\z\6\l\i\8\n\m\o\s\c\8\n\y\v\h\z\f\o\8\p\u\8\c\f\x\1\i\0\e\h\w\u\z\m\r\i\e\i\7\v\1\m\g\w\2\b\1\w\p\x\f\m\j\9\p\s\r\d\9\0\u\k\2\q\o\q ]] 00:26:05.790 06:18:36 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:05.790 06:18:36 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:26:05.790 [2024-06-11 06:18:36.340362] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:05.790 [2024-06-11 06:18:36.340535] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130588 ] 00:26:06.050 [2024-06-11 06:18:36.498595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.309 [2024-06-11 06:18:36.736279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.948  Copying: 512/512 [B] (average 500 kBps) 00:26:07.948 00:26:07.948 06:18:38 -- dd/posix.sh@93 -- # [[ pypr2oah2ig66b17laxhb2og8qhneewlel26p5d8h5uzzzb8cnavmjw8qyzpxsabo8hkrork2z1ou8xo4icgmpwdvklmaogbzw5b76roa0j8rvf5luuspvi6w86bkcudn9gu8prveh97myjrnby89z5abz1vjsqvafon5tiwsykddrtcbnp6v4q1obyya0l4nbsshgttpixdki37ywt2qci8mi8iyferbri1nbytv9iuwb8p0x7lotmtdd9wg46akpak10itcrtmhq0eaguxx31iqn04m3w0hzjr483uyzmg7tsu9tywoyb62xgf4serka6hz50v5bm0u3s5a71excv5p7jba810v2gtob2h4c9qkk0sl5vd2771n7ptcr26wleceb5vvungbyr4q4soubvt7i39m1njvjzeol2vgxhy6slwmu3n15j9h9ou7fz6li8nmosc8nyvhzfo8pu8cfx1i0ehwuzmriei7v1mgw2b1wpxfmj9psrd90uk2qoq == \p\y\p\r\2\o\a\h\2\i\g\6\6\b\1\7\l\a\x\h\b\2\o\g\8\q\h\n\e\e\w\l\e\l\2\6\p\5\d\8\h\5\u\z\z\z\b\8\c\n\a\v\m\j\w\8\q\y\z\p\x\s\a\b\o\8\h\k\r\o\r\k\2\z\1\o\u\8\x\o\4\i\c\g\m\p\w\d\v\k\l\m\a\o\g\b\z\w\5\b\7\6\r\o\a\0\j\8\r\v\f\5\l\u\u\s\p\v\i\6\w\8\6\b\k\c\u\d\n\9\g\u\8\p\r\v\e\h\9\7\m\y\j\r\n\b\y\8\9\z\5\a\b\z\1\v\j\s\q\v\a\f\o\n\5\t\i\w\s\y\k\d\d\r\t\c\b\n\p\6\v\4\q\1\o\b\y\y\a\0\l\4\n\b\s\s\h\g\t\t\p\i\x\d\k\i\3\7\y\w\t\2\q\c\i\8\m\i\8\i\y\f\e\r\b\r\i\1\n\b\y\t\v\9\i\u\w\b\8\p\0\x\7\l\o\t\m\t\d\d\9\w\g\4\6\a\k\p\a\k\1\0\i\t\c\r\t\m\h\q\0\e\a\g\u\x\x\3\1\i\q\n\0\4\m\3\w\0\h\z\j\r\4\8\3\u\y\z\m\g\7\t\s\u\9\t\y\w\o\y\b\6\2\x\g\f\4\s\e\r\k\a\6\h\z\5\0\v\5\b\m\0\u\3\s\5\a\7\1\e\x\c\v\5\p\7\j\b\a\8\1\0\v\2\g\t\o\b\2\h\4\c\9\q\k\k\0\s\l\5\v\d\2\7\7\1\n\7\p\t\c\r\2\6\w\l\e\c\e\b\5\v\v\u\n\g\b\y\r\4\q\4\s\o\u\b\v\t\7\i\3\9\m\1\n\j\v\j\z\e\o\l\2\v\g\x\h\y\6\s\l\w\m\u\3\n\1\5\j\9\h\9\o\u\7\f\z\6\l\i\8\n\m\o\s\c\8\n\y\v\h\z\f\o\8\p\u\8\c\f\x\1\i\0\e\h\w\u\z\m\r\i\e\i\7\v\1\m\g\w\2\b\1\w\p\x\f\m\j\9\p\s\r\d\9\0\u\k\2\q\o\q ]] 00:26:07.948 06:18:38 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:07.948 06:18:38 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:26:08.208 [2024-06-11 06:18:38.637582] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:08.208 [2024-06-11 06:18:38.637788] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130621 ] 00:26:08.208 [2024-06-11 06:18:38.817173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.468 [2024-06-11 06:18:39.041164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.416  Copying: 512/512 [B] (average 166 kBps) 00:26:10.416 00:26:10.416 06:18:40 -- dd/posix.sh@93 -- # [[ pypr2oah2ig66b17laxhb2og8qhneewlel26p5d8h5uzzzb8cnavmjw8qyzpxsabo8hkrork2z1ou8xo4icgmpwdvklmaogbzw5b76roa0j8rvf5luuspvi6w86bkcudn9gu8prveh97myjrnby89z5abz1vjsqvafon5tiwsykddrtcbnp6v4q1obyya0l4nbsshgttpixdki37ywt2qci8mi8iyferbri1nbytv9iuwb8p0x7lotmtdd9wg46akpak10itcrtmhq0eaguxx31iqn04m3w0hzjr483uyzmg7tsu9tywoyb62xgf4serka6hz50v5bm0u3s5a71excv5p7jba810v2gtob2h4c9qkk0sl5vd2771n7ptcr26wleceb5vvungbyr4q4soubvt7i39m1njvjzeol2vgxhy6slwmu3n15j9h9ou7fz6li8nmosc8nyvhzfo8pu8cfx1i0ehwuzmriei7v1mgw2b1wpxfmj9psrd90uk2qoq == \p\y\p\r\2\o\a\h\2\i\g\6\6\b\1\7\l\a\x\h\b\2\o\g\8\q\h\n\e\e\w\l\e\l\2\6\p\5\d\8\h\5\u\z\z\z\b\8\c\n\a\v\m\j\w\8\q\y\z\p\x\s\a\b\o\8\h\k\r\o\r\k\2\z\1\o\u\8\x\o\4\i\c\g\m\p\w\d\v\k\l\m\a\o\g\b\z\w\5\b\7\6\r\o\a\0\j\8\r\v\f\5\l\u\u\s\p\v\i\6\w\8\6\b\k\c\u\d\n\9\g\u\8\p\r\v\e\h\9\7\m\y\j\r\n\b\y\8\9\z\5\a\b\z\1\v\j\s\q\v\a\f\o\n\5\t\i\w\s\y\k\d\d\r\t\c\b\n\p\6\v\4\q\1\o\b\y\y\a\0\l\4\n\b\s\s\h\g\t\t\p\i\x\d\k\i\3\7\y\w\t\2\q\c\i\8\m\i\8\i\y\f\e\r\b\r\i\1\n\b\y\t\v\9\i\u\w\b\8\p\0\x\7\l\o\t\m\t\d\d\9\w\g\4\6\a\k\p\a\k\1\0\i\t\c\r\t\m\h\q\0\e\a\g\u\x\x\3\1\i\q\n\0\4\m\3\w\0\h\z\j\r\4\8\3\u\y\z\m\g\7\t\s\u\9\t\y\w\o\y\b\6\2\x\g\f\4\s\e\r\k\a\6\h\z\5\0\v\5\b\m\0\u\3\s\5\a\7\1\e\x\c\v\5\p\7\j\b\a\8\1\0\v\2\g\t\o\b\2\h\4\c\9\q\k\k\0\s\l\5\v\d\2\7\7\1\n\7\p\t\c\r\2\6\w\l\e\c\e\b\5\v\v\u\n\g\b\y\r\4\q\4\s\o\u\b\v\t\7\i\3\9\m\1\n\j\v\j\z\e\o\l\2\v\g\x\h\y\6\s\l\w\m\u\3\n\1\5\j\9\h\9\o\u\7\f\z\6\l\i\8\n\m\o\s\c\8\n\y\v\h\z\f\o\8\p\u\8\c\f\x\1\i\0\e\h\w\u\z\m\r\i\e\i\7\v\1\m\g\w\2\b\1\w\p\x\f\m\j\9\p\s\r\d\9\0\u\k\2\q\o\q ]] 00:26:10.416 06:18:40 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:10.416 06:18:40 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:26:10.416 [2024-06-11 06:18:40.930117] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:10.416 [2024-06-11 06:18:40.930324] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130648 ] 00:26:10.675 [2024-06-11 06:18:41.113391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.935 [2024-06-11 06:18:41.357540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.572  Copying: 512/512 [B] (average 125 kBps) 00:26:12.572 00:26:12.572 06:18:43 -- dd/posix.sh@93 -- # [[ pypr2oah2ig66b17laxhb2og8qhneewlel26p5d8h5uzzzb8cnavmjw8qyzpxsabo8hkrork2z1ou8xo4icgmpwdvklmaogbzw5b76roa0j8rvf5luuspvi6w86bkcudn9gu8prveh97myjrnby89z5abz1vjsqvafon5tiwsykddrtcbnp6v4q1obyya0l4nbsshgttpixdki37ywt2qci8mi8iyferbri1nbytv9iuwb8p0x7lotmtdd9wg46akpak10itcrtmhq0eaguxx31iqn04m3w0hzjr483uyzmg7tsu9tywoyb62xgf4serka6hz50v5bm0u3s5a71excv5p7jba810v2gtob2h4c9qkk0sl5vd2771n7ptcr26wleceb5vvungbyr4q4soubvt7i39m1njvjzeol2vgxhy6slwmu3n15j9h9ou7fz6li8nmosc8nyvhzfo8pu8cfx1i0ehwuzmriei7v1mgw2b1wpxfmj9psrd90uk2qoq == \p\y\p\r\2\o\a\h\2\i\g\6\6\b\1\7\l\a\x\h\b\2\o\g\8\q\h\n\e\e\w\l\e\l\2\6\p\5\d\8\h\5\u\z\z\z\b\8\c\n\a\v\m\j\w\8\q\y\z\p\x\s\a\b\o\8\h\k\r\o\r\k\2\z\1\o\u\8\x\o\4\i\c\g\m\p\w\d\v\k\l\m\a\o\g\b\z\w\5\b\7\6\r\o\a\0\j\8\r\v\f\5\l\u\u\s\p\v\i\6\w\8\6\b\k\c\u\d\n\9\g\u\8\p\r\v\e\h\9\7\m\y\j\r\n\b\y\8\9\z\5\a\b\z\1\v\j\s\q\v\a\f\o\n\5\t\i\w\s\y\k\d\d\r\t\c\b\n\p\6\v\4\q\1\o\b\y\y\a\0\l\4\n\b\s\s\h\g\t\t\p\i\x\d\k\i\3\7\y\w\t\2\q\c\i\8\m\i\8\i\y\f\e\r\b\r\i\1\n\b\y\t\v\9\i\u\w\b\8\p\0\x\7\l\o\t\m\t\d\d\9\w\g\4\6\a\k\p\a\k\1\0\i\t\c\r\t\m\h\q\0\e\a\g\u\x\x\3\1\i\q\n\0\4\m\3\w\0\h\z\j\r\4\8\3\u\y\z\m\g\7\t\s\u\9\t\y\w\o\y\b\6\2\x\g\f\4\s\e\r\k\a\6\h\z\5\0\v\5\b\m\0\u\3\s\5\a\7\1\e\x\c\v\5\p\7\j\b\a\8\1\0\v\2\g\t\o\b\2\h\4\c\9\q\k\k\0\s\l\5\v\d\2\7\7\1\n\7\p\t\c\r\2\6\w\l\e\c\e\b\5\v\v\u\n\g\b\y\r\4\q\4\s\o\u\b\v\t\7\i\3\9\m\1\n\j\v\j\z\e\o\l\2\v\g\x\h\y\6\s\l\w\m\u\3\n\1\5\j\9\h\9\o\u\7\f\z\6\l\i\8\n\m\o\s\c\8\n\y\v\h\z\f\o\8\p\u\8\c\f\x\1\i\0\e\h\w\u\z\m\r\i\e\i\7\v\1\m\g\w\2\b\1\w\p\x\f\m\j\9\p\s\r\d\9\0\u\k\2\q\o\q ]] 00:26:12.572 06:18:43 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:26:12.572 06:18:43 -- dd/posix.sh@86 -- # gen_bytes 512 00:26:12.572 06:18:43 -- dd/common.sh@98 -- # xtrace_disable 00:26:12.572 06:18:43 -- common/autotest_common.sh@10 -- # set +x 00:26:12.572 06:18:43 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:12.572 06:18:43 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:26:12.831 [2024-06-11 06:18:43.293451] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:12.831 [2024-06-11 06:18:43.293679] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130680 ] 00:26:12.831 [2024-06-11 06:18:43.473513] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.090 [2024-06-11 06:18:43.729201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.036  Copying: 512/512 [B] (average 500 kBps) 00:26:15.036 00:26:15.037 06:18:45 -- dd/posix.sh@93 -- # [[ oscawzmdeofyujnbfelgiwsjg1dpx6zo4vd0b6rzz8zytiqgd7rcn3fkx7pwsui5f43m26on81mdaq1b7ab0g8w6iwzldjckewhj6pu151yojibr2m9yf3ur1kzgckiuocd2dpfsbh2r73nomaj234x92iqvund9olbf9nk92zr0w9sxy1ked4lg5vr298sqbupzaumytezuwef7665sckefnkamgllqpqek7pdovtkowvh3m0pusjjponjzrishen0xcmoqoy8n4n3ct3i4obkv60j7s4kz2aqs3xtvxf8lrzxrqixjgypj1aa6030fytgzyptnkg2zvcuqibj1ncrz1w0076xie8bib5305i9qpm560us2pjy9bvit40caoesyvqq72m6ja6k64g0xfd5x7sobm5715zppjrl690krofupjlhghuio9lozj9q1872vqa6a7l312zo12j4qyucmm65e5g2myi6rf6yfwhuyqtqxy821zwh1ndektkqi == \o\s\c\a\w\z\m\d\e\o\f\y\u\j\n\b\f\e\l\g\i\w\s\j\g\1\d\p\x\6\z\o\4\v\d\0\b\6\r\z\z\8\z\y\t\i\q\g\d\7\r\c\n\3\f\k\x\7\p\w\s\u\i\5\f\4\3\m\2\6\o\n\8\1\m\d\a\q\1\b\7\a\b\0\g\8\w\6\i\w\z\l\d\j\c\k\e\w\h\j\6\p\u\1\5\1\y\o\j\i\b\r\2\m\9\y\f\3\u\r\1\k\z\g\c\k\i\u\o\c\d\2\d\p\f\s\b\h\2\r\7\3\n\o\m\a\j\2\3\4\x\9\2\i\q\v\u\n\d\9\o\l\b\f\9\n\k\9\2\z\r\0\w\9\s\x\y\1\k\e\d\4\l\g\5\v\r\2\9\8\s\q\b\u\p\z\a\u\m\y\t\e\z\u\w\e\f\7\6\6\5\s\c\k\e\f\n\k\a\m\g\l\l\q\p\q\e\k\7\p\d\o\v\t\k\o\w\v\h\3\m\0\p\u\s\j\j\p\o\n\j\z\r\i\s\h\e\n\0\x\c\m\o\q\o\y\8\n\4\n\3\c\t\3\i\4\o\b\k\v\6\0\j\7\s\4\k\z\2\a\q\s\3\x\t\v\x\f\8\l\r\z\x\r\q\i\x\j\g\y\p\j\1\a\a\6\0\3\0\f\y\t\g\z\y\p\t\n\k\g\2\z\v\c\u\q\i\b\j\1\n\c\r\z\1\w\0\0\7\6\x\i\e\8\b\i\b\5\3\0\5\i\9\q\p\m\5\6\0\u\s\2\p\j\y\9\b\v\i\t\4\0\c\a\o\e\s\y\v\q\q\7\2\m\6\j\a\6\k\6\4\g\0\x\f\d\5\x\7\s\o\b\m\5\7\1\5\z\p\p\j\r\l\6\9\0\k\r\o\f\u\p\j\l\h\g\h\u\i\o\9\l\o\z\j\9\q\1\8\7\2\v\q\a\6\a\7\l\3\1\2\z\o\1\2\j\4\q\y\u\c\m\m\6\5\e\5\g\2\m\y\i\6\r\f\6\y\f\w\h\u\y\q\t\q\x\y\8\2\1\z\w\h\1\n\d\e\k\t\k\q\i ]] 00:26:15.037 06:18:45 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:15.037 06:18:45 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:26:15.037 [2024-06-11 06:18:45.613940] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:15.037 [2024-06-11 06:18:45.614177] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130709 ] 00:26:15.295 [2024-06-11 06:18:45.781742] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.554 [2024-06-11 06:18:46.019617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.191  Copying: 512/512 [B] (average 500 kBps) 00:26:17.191 00:26:17.191 06:18:47 -- dd/posix.sh@93 -- # [[ oscawzmdeofyujnbfelgiwsjg1dpx6zo4vd0b6rzz8zytiqgd7rcn3fkx7pwsui5f43m26on81mdaq1b7ab0g8w6iwzldjckewhj6pu151yojibr2m9yf3ur1kzgckiuocd2dpfsbh2r73nomaj234x92iqvund9olbf9nk92zr0w9sxy1ked4lg5vr298sqbupzaumytezuwef7665sckefnkamgllqpqek7pdovtkowvh3m0pusjjponjzrishen0xcmoqoy8n4n3ct3i4obkv60j7s4kz2aqs3xtvxf8lrzxrqixjgypj1aa6030fytgzyptnkg2zvcuqibj1ncrz1w0076xie8bib5305i9qpm560us2pjy9bvit40caoesyvqq72m6ja6k64g0xfd5x7sobm5715zppjrl690krofupjlhghuio9lozj9q1872vqa6a7l312zo12j4qyucmm65e5g2myi6rf6yfwhuyqtqxy821zwh1ndektkqi == \o\s\c\a\w\z\m\d\e\o\f\y\u\j\n\b\f\e\l\g\i\w\s\j\g\1\d\p\x\6\z\o\4\v\d\0\b\6\r\z\z\8\z\y\t\i\q\g\d\7\r\c\n\3\f\k\x\7\p\w\s\u\i\5\f\4\3\m\2\6\o\n\8\1\m\d\a\q\1\b\7\a\b\0\g\8\w\6\i\w\z\l\d\j\c\k\e\w\h\j\6\p\u\1\5\1\y\o\j\i\b\r\2\m\9\y\f\3\u\r\1\k\z\g\c\k\i\u\o\c\d\2\d\p\f\s\b\h\2\r\7\3\n\o\m\a\j\2\3\4\x\9\2\i\q\v\u\n\d\9\o\l\b\f\9\n\k\9\2\z\r\0\w\9\s\x\y\1\k\e\d\4\l\g\5\v\r\2\9\8\s\q\b\u\p\z\a\u\m\y\t\e\z\u\w\e\f\7\6\6\5\s\c\k\e\f\n\k\a\m\g\l\l\q\p\q\e\k\7\p\d\o\v\t\k\o\w\v\h\3\m\0\p\u\s\j\j\p\o\n\j\z\r\i\s\h\e\n\0\x\c\m\o\q\o\y\8\n\4\n\3\c\t\3\i\4\o\b\k\v\6\0\j\7\s\4\k\z\2\a\q\s\3\x\t\v\x\f\8\l\r\z\x\r\q\i\x\j\g\y\p\j\1\a\a\6\0\3\0\f\y\t\g\z\y\p\t\n\k\g\2\z\v\c\u\q\i\b\j\1\n\c\r\z\1\w\0\0\7\6\x\i\e\8\b\i\b\5\3\0\5\i\9\q\p\m\5\6\0\u\s\2\p\j\y\9\b\v\i\t\4\0\c\a\o\e\s\y\v\q\q\7\2\m\6\j\a\6\k\6\4\g\0\x\f\d\5\x\7\s\o\b\m\5\7\1\5\z\p\p\j\r\l\6\9\0\k\r\o\f\u\p\j\l\h\g\h\u\i\o\9\l\o\z\j\9\q\1\8\7\2\v\q\a\6\a\7\l\3\1\2\z\o\1\2\j\4\q\y\u\c\m\m\6\5\e\5\g\2\m\y\i\6\r\f\6\y\f\w\h\u\y\q\t\q\x\y\8\2\1\z\w\h\1\n\d\e\k\t\k\q\i ]] 00:26:17.191 06:18:47 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:17.191 06:18:47 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:26:17.453 [2024-06-11 06:18:47.880603] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:17.453 [2024-06-11 06:18:47.880786] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130745 ] 00:26:17.453 [2024-06-11 06:18:48.041642] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.712 [2024-06-11 06:18:48.274766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.657  Copying: 512/512 [B] (average 250 kBps) 00:26:19.657 00:26:19.657 06:18:50 -- dd/posix.sh@93 -- # [[ oscawzmdeofyujnbfelgiwsjg1dpx6zo4vd0b6rzz8zytiqgd7rcn3fkx7pwsui5f43m26on81mdaq1b7ab0g8w6iwzldjckewhj6pu151yojibr2m9yf3ur1kzgckiuocd2dpfsbh2r73nomaj234x92iqvund9olbf9nk92zr0w9sxy1ked4lg5vr298sqbupzaumytezuwef7665sckefnkamgllqpqek7pdovtkowvh3m0pusjjponjzrishen0xcmoqoy8n4n3ct3i4obkv60j7s4kz2aqs3xtvxf8lrzxrqixjgypj1aa6030fytgzyptnkg2zvcuqibj1ncrz1w0076xie8bib5305i9qpm560us2pjy9bvit40caoesyvqq72m6ja6k64g0xfd5x7sobm5715zppjrl690krofupjlhghuio9lozj9q1872vqa6a7l312zo12j4qyucmm65e5g2myi6rf6yfwhuyqtqxy821zwh1ndektkqi == \o\s\c\a\w\z\m\d\e\o\f\y\u\j\n\b\f\e\l\g\i\w\s\j\g\1\d\p\x\6\z\o\4\v\d\0\b\6\r\z\z\8\z\y\t\i\q\g\d\7\r\c\n\3\f\k\x\7\p\w\s\u\i\5\f\4\3\m\2\6\o\n\8\1\m\d\a\q\1\b\7\a\b\0\g\8\w\6\i\w\z\l\d\j\c\k\e\w\h\j\6\p\u\1\5\1\y\o\j\i\b\r\2\m\9\y\f\3\u\r\1\k\z\g\c\k\i\u\o\c\d\2\d\p\f\s\b\h\2\r\7\3\n\o\m\a\j\2\3\4\x\9\2\i\q\v\u\n\d\9\o\l\b\f\9\n\k\9\2\z\r\0\w\9\s\x\y\1\k\e\d\4\l\g\5\v\r\2\9\8\s\q\b\u\p\z\a\u\m\y\t\e\z\u\w\e\f\7\6\6\5\s\c\k\e\f\n\k\a\m\g\l\l\q\p\q\e\k\7\p\d\o\v\t\k\o\w\v\h\3\m\0\p\u\s\j\j\p\o\n\j\z\r\i\s\h\e\n\0\x\c\m\o\q\o\y\8\n\4\n\3\c\t\3\i\4\o\b\k\v\6\0\j\7\s\4\k\z\2\a\q\s\3\x\t\v\x\f\8\l\r\z\x\r\q\i\x\j\g\y\p\j\1\a\a\6\0\3\0\f\y\t\g\z\y\p\t\n\k\g\2\z\v\c\u\q\i\b\j\1\n\c\r\z\1\w\0\0\7\6\x\i\e\8\b\i\b\5\3\0\5\i\9\q\p\m\5\6\0\u\s\2\p\j\y\9\b\v\i\t\4\0\c\a\o\e\s\y\v\q\q\7\2\m\6\j\a\6\k\6\4\g\0\x\f\d\5\x\7\s\o\b\m\5\7\1\5\z\p\p\j\r\l\6\9\0\k\r\o\f\u\p\j\l\h\g\h\u\i\o\9\l\o\z\j\9\q\1\8\7\2\v\q\a\6\a\7\l\3\1\2\z\o\1\2\j\4\q\y\u\c\m\m\6\5\e\5\g\2\m\y\i\6\r\f\6\y\f\w\h\u\y\q\t\q\x\y\8\2\1\z\w\h\1\n\d\e\k\t\k\q\i ]] 00:26:19.657 06:18:50 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:19.657 06:18:50 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:26:19.657 [2024-06-11 06:18:50.194370] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
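Each pass of this loop changes only the output flag: posix.sh generates one random payload, pushes it through spdk_dd with --oflag set to nonblock, then sync, and now dsync, and string-compares dd.dump1 against the original after every copy. A minimal standalone sketch of that round-trip, with repository paths abbreviated and cmp standing in for the literal [[ ... == ... ]] match above:

  for flag in nonblock sync dsync; do
    build/bin/spdk_dd --aio --if=test/dd/dd.dump0 --iflag=nonblock \
      --of=test/dd/dd.dump1 --oflag=$flag
    cmp test/dd/dd.dump0 test/dd/dd.dump1 || echo "payload changed with --oflag=$flag"
  done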
00:26:19.657 [2024-06-11 06:18:50.194596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130774 ] 00:26:19.916 [2024-06-11 06:18:50.375929] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.174 [2024-06-11 06:18:50.603428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.877  Copying: 512/512 [B] (average 166 kBps) 00:26:21.877 00:26:21.877 06:18:52 -- dd/posix.sh@93 -- # [[ oscawzmdeofyujnbfelgiwsjg1dpx6zo4vd0b6rzz8zytiqgd7rcn3fkx7pwsui5f43m26on81mdaq1b7ab0g8w6iwzldjckewhj6pu151yojibr2m9yf3ur1kzgckiuocd2dpfsbh2r73nomaj234x92iqvund9olbf9nk92zr0w9sxy1ked4lg5vr298sqbupzaumytezuwef7665sckefnkamgllqpqek7pdovtkowvh3m0pusjjponjzrishen0xcmoqoy8n4n3ct3i4obkv60j7s4kz2aqs3xtvxf8lrzxrqixjgypj1aa6030fytgzyptnkg2zvcuqibj1ncrz1w0076xie8bib5305i9qpm560us2pjy9bvit40caoesyvqq72m6ja6k64g0xfd5x7sobm5715zppjrl690krofupjlhghuio9lozj9q1872vqa6a7l312zo12j4qyucmm65e5g2myi6rf6yfwhuyqtqxy821zwh1ndektkqi == \o\s\c\a\w\z\m\d\e\o\f\y\u\j\n\b\f\e\l\g\i\w\s\j\g\1\d\p\x\6\z\o\4\v\d\0\b\6\r\z\z\8\z\y\t\i\q\g\d\7\r\c\n\3\f\k\x\7\p\w\s\u\i\5\f\4\3\m\2\6\o\n\8\1\m\d\a\q\1\b\7\a\b\0\g\8\w\6\i\w\z\l\d\j\c\k\e\w\h\j\6\p\u\1\5\1\y\o\j\i\b\r\2\m\9\y\f\3\u\r\1\k\z\g\c\k\i\u\o\c\d\2\d\p\f\s\b\h\2\r\7\3\n\o\m\a\j\2\3\4\x\9\2\i\q\v\u\n\d\9\o\l\b\f\9\n\k\9\2\z\r\0\w\9\s\x\y\1\k\e\d\4\l\g\5\v\r\2\9\8\s\q\b\u\p\z\a\u\m\y\t\e\z\u\w\e\f\7\6\6\5\s\c\k\e\f\n\k\a\m\g\l\l\q\p\q\e\k\7\p\d\o\v\t\k\o\w\v\h\3\m\0\p\u\s\j\j\p\o\n\j\z\r\i\s\h\e\n\0\x\c\m\o\q\o\y\8\n\4\n\3\c\t\3\i\4\o\b\k\v\6\0\j\7\s\4\k\z\2\a\q\s\3\x\t\v\x\f\8\l\r\z\x\r\q\i\x\j\g\y\p\j\1\a\a\6\0\3\0\f\y\t\g\z\y\p\t\n\k\g\2\z\v\c\u\q\i\b\j\1\n\c\r\z\1\w\0\0\7\6\x\i\e\8\b\i\b\5\3\0\5\i\9\q\p\m\5\6\0\u\s\2\p\j\y\9\b\v\i\t\4\0\c\a\o\e\s\y\v\q\q\7\2\m\6\j\a\6\k\6\4\g\0\x\f\d\5\x\7\s\o\b\m\5\7\1\5\z\p\p\j\r\l\6\9\0\k\r\o\f\u\p\j\l\h\g\h\u\i\o\9\l\o\z\j\9\q\1\8\7\2\v\q\a\6\a\7\l\3\1\2\z\o\1\2\j\4\q\y\u\c\m\m\6\5\e\5\g\2\m\y\i\6\r\f\6\y\f\w\h\u\y\q\t\q\x\y\8\2\1\z\w\h\1\n\d\e\k\t\k\q\i ]] 00:26:21.877 00:26:21.877 real 0m18.451s 00:26:21.877 user 0m14.912s 00:26:21.877 sys 0m2.451s 00:26:21.877 06:18:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.877 06:18:52 -- common/autotest_common.sh@10 -- # set +x 00:26:21.877 ************************************ 00:26:21.877 END TEST dd_flags_misc_forced_aio 00:26:21.877 ************************************ 00:26:21.877 06:18:52 -- dd/posix.sh@1 -- # cleanup 00:26:21.877 06:18:52 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:26:21.877 06:18:52 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:26:21.877 00:26:21.877 real 1m18.711s 00:26:21.877 user 1m1.783s 00:26:21.877 sys 0m10.871s 00:26:21.877 06:18:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.877 06:18:52 -- common/autotest_common.sh@10 -- # set +x 00:26:21.877 ************************************ 00:26:21.877 END TEST spdk_dd_posix 00:26:21.877 ************************************ 00:26:22.135 06:18:52 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:26:22.135 06:18:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:22.135 06:18:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:22.135 06:18:52 -- 
common/autotest_common.sh@10 -- # set +x 00:26:22.135 ************************************ 00:26:22.135 START TEST spdk_dd_malloc 00:26:22.135 ************************************ 00:26:22.135 06:18:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:26:22.135 * Looking for test storage... 00:26:22.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:22.135 06:18:52 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:22.135 06:18:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:22.135 06:18:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:22.135 06:18:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:22.135 06:18:52 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:22.136 06:18:52 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:22.136 06:18:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:22.136 06:18:52 -- paths/export.sh@5 -- # export PATH 00:26:22.136 06:18:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:22.136 06:18:52 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:26:22.136 06:18:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:22.136 06:18:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:22.136 06:18:52 -- common/autotest_common.sh@10 -- # set +x 00:26:22.136 ************************************ 00:26:22.136 START TEST dd_malloc_copy 00:26:22.136 ************************************ 00:26:22.136 06:18:52 -- 
common/autotest_common.sh@1104 -- # malloc_copy 00:26:22.136 06:18:52 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:26:22.136 06:18:52 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:26:22.136 06:18:52 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:26:22.136 06:18:52 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:26:22.136 06:18:52 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:26:22.136 06:18:52 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:26:22.136 06:18:52 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:26:22.136 06:18:52 -- dd/malloc.sh@28 -- # gen_conf 00:26:22.136 06:18:52 -- dd/common.sh@31 -- # xtrace_disable 00:26:22.136 06:18:52 -- common/autotest_common.sh@10 -- # set +x 00:26:22.136 { 00:26:22.136 "subsystems": [ 00:26:22.136 { 00:26:22.136 "subsystem": "bdev", 00:26:22.136 "config": [ 00:26:22.136 { 00:26:22.136 "params": { 00:26:22.136 "block_size": 512, 00:26:22.136 "num_blocks": 1048576, 00:26:22.136 "name": "malloc0" 00:26:22.136 }, 00:26:22.136 "method": "bdev_malloc_create" 00:26:22.136 }, 00:26:22.136 { 00:26:22.136 "params": { 00:26:22.136 "block_size": 512, 00:26:22.136 "num_blocks": 1048576, 00:26:22.136 "name": "malloc1" 00:26:22.136 }, 00:26:22.136 "method": "bdev_malloc_create" 00:26:22.136 }, 00:26:22.136 { 00:26:22.136 "method": "bdev_wait_for_examine" 00:26:22.136 } 00:26:22.136 ] 00:26:22.136 } 00:26:22.136 ] 00:26:22.136 } 00:26:22.136 [2024-06-11 06:18:52.775096] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
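The JSON blob printed above is the entire device setup for this copy: two RAM-backed malloc bdevs of 1048576 blocks x 512 bytes (512 MiB each, matching the 512/512 [MB] totals in the copy that follows) plus a bdev_wait_for_examine barrier, handed to spdk_dd on a spare descriptor while --ib/--ob name bdevs instead of files. A sketch of the same call outside the harness, assuming --json reads /dev/stdin the same way it reads the /dev/fd/62 substitution used by malloc.sh:

  build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/stdin <<'EOF'
  {"subsystems": [{"subsystem": "bdev", "config": [
    {"method": "bdev_malloc_create",
     "params": {"name": "malloc0", "num_blocks": 1048576, "block_size": 512}},
    {"method": "bdev_malloc_create",
     "params": {"name": "malloc1", "num_blocks": 1048576, "block_size": 512}},
    {"method": "bdev_wait_for_examine"}]}]}
  EOF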
00:26:22.136 [2024-06-11 06:18:52.775304] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130871 ] 00:26:22.394 [2024-06-11 06:18:52.951242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.654 [2024-06-11 06:18:53.191596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.784  Copying: 239/512 [MB] (239 MBps) Copying: 477/512 [MB] (238 MBps) Copying: 512/512 [MB] (average 238 MBps) 00:26:31.784 00:26:31.784 06:19:01 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:26:31.784 06:19:01 -- dd/malloc.sh@33 -- # gen_conf 00:26:31.784 06:19:01 -- dd/common.sh@31 -- # xtrace_disable 00:26:31.784 06:19:01 -- common/autotest_common.sh@10 -- # set +x 00:26:31.784 { 00:26:31.784 "subsystems": [ 00:26:31.784 { 00:26:31.784 "subsystem": "bdev", 00:26:31.784 "config": [ 00:26:31.784 { 00:26:31.784 "params": { 00:26:31.784 "block_size": 512, 00:26:31.784 "num_blocks": 1048576, 00:26:31.784 "name": "malloc0" 00:26:31.784 }, 00:26:31.784 "method": "bdev_malloc_create" 00:26:31.784 }, 00:26:31.784 { 00:26:31.784 "params": { 00:26:31.784 "block_size": 512, 00:26:31.784 "num_blocks": 1048576, 00:26:31.784 "name": "malloc1" 00:26:31.784 }, 00:26:31.784 "method": "bdev_malloc_create" 00:26:31.784 }, 00:26:31.784 { 00:26:31.784 "method": "bdev_wait_for_examine" 00:26:31.784 } 00:26:31.784 ] 00:26:31.784 } 00:26:31.784 ] 00:26:31.784 } 00:26:31.784 [2024-06-11 06:19:01.712158] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:31.784 [2024-06-11 06:19:01.712375] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130983 ] 00:26:31.784 [2024-06-11 06:19:01.894341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.784 [2024-06-11 06:19:02.161034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.891  Copying: 239/512 [MB] (239 MBps) Copying: 480/512 [MB] (241 MBps) Copying: 512/512 [MB] (average 240 MBps) 00:26:39.891 00:26:40.151 00:26:40.151 real 0m17.879s 00:26:40.151 user 0m15.994s 00:26:40.151 sys 0m1.749s 00:26:40.151 06:19:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:40.151 06:19:10 -- common/autotest_common.sh@10 -- # set +x 00:26:40.151 ************************************ 00:26:40.151 END TEST dd_malloc_copy 00:26:40.151 ************************************ 00:26:40.151 00:26:40.151 real 0m18.054s 00:26:40.151 user 0m16.075s 00:26:40.151 sys 0m1.856s 00:26:40.151 06:19:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:40.151 06:19:10 -- common/autotest_common.sh@10 -- # set +x 00:26:40.151 ************************************ 00:26:40.151 END TEST spdk_dd_malloc 00:26:40.151 ************************************ 00:26:40.151 06:19:10 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:26:40.151 06:19:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:40.151 06:19:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:40.151 06:19:10 -- common/autotest_common.sh@10 -- # set +x 00:26:40.151 ************************************ 
00:26:40.151 START TEST spdk_dd_bdev_to_bdev 00:26:40.151 ************************************ 00:26:40.151 06:19:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:26:40.151 * Looking for test storage... 00:26:40.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:40.151 06:19:10 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:40.151 06:19:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:40.151 06:19:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:40.151 06:19:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:40.151 06:19:10 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:40.151 06:19:10 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:40.151 06:19:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:40.151 06:19:10 -- paths/export.sh@5 -- # export PATH 00:26:40.151 06:19:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:40.151 06:19:10 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:26:40.151 06:19:10 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:26:40.151 06:19:10 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:26:40.151 06:19:10 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:26:40.151 06:19:10 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:26:40.151 06:19:10 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:26:40.151 06:19:10 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0 00:26:40.151 06:19:10 -- dd/bdev_to_bdev.sh@68 -- # 
aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:26:40.151 06:19:10 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:26:40.151 06:19:10 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:26:40.151 06:19:10 -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:26:40.151 06:19:10 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:26:40.151 06:19:10 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:26:40.151 06:19:10 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:26:40.410 [2024-06-11 06:19:10.869970] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:26:40.410 [2024-06-11 06:19:10.870182] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131155 ] 00:26:40.410 [2024-06-11 06:19:11.053410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.668 [2024-06-11 06:19:11.280962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.003  Copying: 256/256 [MB] (average 1036 MBps) 00:26:43.003 00:26:43.003 06:19:13 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:43.003 06:19:13 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:43.003 06:19:13 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:26:43.003 06:19:13 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:26:43.003 06:19:13 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:26:43.003 06:19:13 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:26:43.003 06:19:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:43.003 06:19:13 -- common/autotest_common.sh@10 -- # set +x 00:26:43.003 ************************************ 00:26:43.003 START TEST dd_inflate_file 00:26:43.003 ************************************ 00:26:43.003 06:19:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:26:43.003 [2024-06-11 06:19:13.438588] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:43.003 [2024-06-11 06:19:13.438756] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131194 ] 00:26:43.003 [2024-06-11 06:19:13.599532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.262 [2024-06-11 06:19:13.827155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.207  Copying: 64/64 [MB] (average 901 MBps) 00:26:45.207 00:26:45.207 00:26:45.207 real 0m2.348s 00:26:45.207 user 0m1.837s 00:26:45.207 sys 0m0.381s 00:26:45.207 06:19:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:45.207 ************************************ 00:26:45.207 END TEST dd_inflate_file 00:26:45.207 ************************************ 00:26:45.207 06:19:15 -- common/autotest_common.sh@10 -- # set +x 00:26:45.207 06:19:15 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:26:45.207 06:19:15 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:26:45.207 06:19:15 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:26:45.207 06:19:15 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:26:45.207 06:19:15 -- dd/common.sh@31 -- # xtrace_disable 00:26:45.207 06:19:15 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:26:45.207 06:19:15 -- common/autotest_common.sh@10 -- # set +x 00:26:45.207 06:19:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:45.207 06:19:15 -- common/autotest_common.sh@10 -- # set +x 00:26:45.207 ************************************ 00:26:45.207 START TEST dd_copy_to_out_bdev 00:26:45.207 ************************************ 00:26:45.207 06:19:15 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:26:45.207 { 00:26:45.207 "subsystems": [ 00:26:45.207 { 00:26:45.207 "subsystem": "bdev", 00:26:45.207 "config": [ 00:26:45.207 { 00:26:45.207 "params": { 00:26:45.207 "block_size": 4096, 00:26:45.207 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:45.208 "name": "aio1" 00:26:45.208 }, 00:26:45.208 "method": "bdev_aio_create" 00:26:45.208 }, 00:26:45.208 { 00:26:45.208 "params": { 00:26:45.208 "trtype": "pcie", 00:26:45.208 "traddr": "0000:00:06.0", 00:26:45.208 "name": "Nvme0" 00:26:45.208 }, 00:26:45.208 "method": "bdev_nvme_attach_controller" 00:26:45.208 }, 00:26:45.208 { 00:26:45.208 "method": "bdev_wait_for_examine" 00:26:45.208 } 00:26:45.208 ] 00:26:45.208 } 00:26:45.208 ] 00:26:45.208 } 00:26:45.468 [2024-06-11 06:19:15.884789] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
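The byte count checked just above is exact: dd.dump0 starts as the 27-byte line 'This Is Our Magic, find it' (26 characters plus a newline; the redirect into dd.dump0 does not show in the xtrace, but the arithmetic confirms it), and dd_inflate_file then appends 64 x 1048576 zero bytes with --oflag=append, so wc -c lands on 67108891. That oversized file, magic line first, is what dd_copy_to_out_bdev is now pushing into Nvme0n1. The preparation reduced to its commands, paths abbreviated:

  echo 'This Is Our Magic, find it' > test/dd/dd.dump0
  build/bin/spdk_dd --if=/dev/zero --of=test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64
  wc -c < test/dd/dd.dump0   # 67108891 = 27 + 64*1048576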
00:26:45.468 [2024-06-11 06:19:15.885025] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131257 ] 00:26:45.468 [2024-06-11 06:19:16.067065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.749 [2024-06-11 06:19:16.302275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.523  Copying: 64/64 [MB] (average 75 MBps) 00:26:48.523 00:26:48.523 00:26:48.523 real 0m3.319s 00:26:48.523 user 0m2.847s 00:26:48.523 sys 0m0.363s 00:26:48.523 06:19:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:48.524 06:19:19 -- common/autotest_common.sh@10 -- # set +x 00:26:48.524 ************************************ 00:26:48.524 END TEST dd_copy_to_out_bdev 00:26:48.524 ************************************ 00:26:48.782 06:19:19 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:26:48.782 06:19:19 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:26:48.782 06:19:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:48.782 06:19:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:48.782 06:19:19 -- common/autotest_common.sh@10 -- # set +x 00:26:48.782 ************************************ 00:26:48.782 START TEST dd_offset_magic 00:26:48.782 ************************************ 00:26:48.782 06:19:19 -- common/autotest_common.sh@1104 -- # offset_magic 00:26:48.782 06:19:19 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:26:48.782 06:19:19 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:26:48.782 06:19:19 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:26:48.782 06:19:19 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:26:48.782 06:19:19 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:26:48.782 06:19:19 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:26:48.782 06:19:19 -- dd/common.sh@31 -- # xtrace_disable 00:26:48.782 06:19:19 -- common/autotest_common.sh@10 -- # set +x 00:26:48.782 { 00:26:48.782 "subsystems": [ 00:26:48.782 { 00:26:48.782 "subsystem": "bdev", 00:26:48.782 "config": [ 00:26:48.782 { 00:26:48.782 "params": { 00:26:48.782 "block_size": 4096, 00:26:48.782 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:48.782 "name": "aio1" 00:26:48.782 }, 00:26:48.782 "method": "bdev_aio_create" 00:26:48.782 }, 00:26:48.782 { 00:26:48.782 "params": { 00:26:48.782 "trtype": "pcie", 00:26:48.782 "traddr": "0000:00:06.0", 00:26:48.782 "name": "Nvme0" 00:26:48.782 }, 00:26:48.782 "method": "bdev_nvme_attach_controller" 00:26:48.782 }, 00:26:48.782 { 00:26:48.783 "method": "bdev_wait_for_examine" 00:26:48.783 } 00:26:48.783 ] 00:26:48.783 } 00:26:48.783 ] 00:26:48.783 } 00:26:48.783 [2024-06-11 06:19:19.282063] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:48.783 [2024-06-11 06:19:19.282272] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131317 ] 00:26:49.041 [2024-06-11 06:19:19.465541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.300 [2024-06-11 06:19:19.720457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.615  Copying: 65/65 [MB] (average 145 MBps) 00:26:51.615 00:26:51.874 06:19:22 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:26:51.874 06:19:22 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:26:51.874 06:19:22 -- dd/common.sh@31 -- # xtrace_disable 00:26:51.874 06:19:22 -- common/autotest_common.sh@10 -- # set +x 00:26:51.874 { 00:26:51.874 "subsystems": [ 00:26:51.874 { 00:26:51.874 "subsystem": "bdev", 00:26:51.874 "config": [ 00:26:51.874 { 00:26:51.874 "params": { 00:26:51.874 "block_size": 4096, 00:26:51.874 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:51.874 "name": "aio1" 00:26:51.874 }, 00:26:51.874 "method": "bdev_aio_create" 00:26:51.874 }, 00:26:51.874 { 00:26:51.874 "params": { 00:26:51.874 "trtype": "pcie", 00:26:51.874 "traddr": "0000:00:06.0", 00:26:51.874 "name": "Nvme0" 00:26:51.874 }, 00:26:51.874 "method": "bdev_nvme_attach_controller" 00:26:51.874 }, 00:26:51.874 { 00:26:51.874 "method": "bdev_wait_for_examine" 00:26:51.874 } 00:26:51.874 ] 00:26:51.874 } 00:26:51.874 ] 00:26:51.874 } 00:26:51.874 [2024-06-11 06:19:22.346825] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:51.874 [2024-06-11 06:19:22.346976] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131367 ] 00:26:51.874 [2024-06-11 06:19:22.509450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.133 [2024-06-11 06:19:22.755938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.081  Copying: 1024/1024 [kB] (average 1000 MBps) 00:26:54.081 00:26:54.340 06:19:24 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:26:54.340 06:19:24 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:26:54.340 06:19:24 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:26:54.340 06:19:24 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:26:54.340 06:19:24 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:26:54.340 06:19:24 -- dd/common.sh@31 -- # xtrace_disable 00:26:54.340 06:19:24 -- common/autotest_common.sh@10 -- # set +x 00:26:54.340 { 00:26:54.340 "subsystems": [ 00:26:54.340 { 00:26:54.340 "subsystem": "bdev", 00:26:54.340 "config": [ 00:26:54.340 { 00:26:54.340 "params": { 00:26:54.340 "block_size": 4096, 00:26:54.340 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:54.340 "name": "aio1" 00:26:54.340 }, 00:26:54.340 "method": "bdev_aio_create" 00:26:54.340 }, 00:26:54.340 { 00:26:54.340 "params": { 00:26:54.340 "trtype": "pcie", 00:26:54.340 "traddr": "0000:00:06.0", 00:26:54.341 "name": "Nvme0" 00:26:54.341 }, 00:26:54.341 "method": "bdev_nvme_attach_controller" 00:26:54.341 }, 00:26:54.341 { 00:26:54.341 "method": "bdev_wait_for_examine" 00:26:54.341 } 00:26:54.341 ] 00:26:54.341 } 00:26:54.341 ] 00:26:54.341 } 00:26:54.341 [2024-06-11 06:19:24.813759] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
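The comparison that just passed is the point of offset_magic: the copy_to_out_bdev step left the magic line at byte 0 of Nvme0n1, so copying 65 MiB into aio1 with --seek=16 must surface it exactly 16 MiB in, where the --skip=16 read-back picks it up; the same write/read/compare is now repeating at offset 64. One iteration as a sketch, with cfg.json standing in for the /dev/fd/62 config above and head -c 26 standing in for the read -rn26 used by bdev_to_bdev.sh:

  build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json cfg.json
  build/bin/spdk_dd --ib=aio1 --of=test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json cfg.json
  head -c 26 test/dd/dd.dump1   # expect: This Is Our Magic, find it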
00:26:54.341 [2024-06-11 06:19:24.814536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131408 ] 00:26:54.600 [2024-06-11 06:19:24.996381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.600 [2024-06-11 06:19:25.223921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.917  Copying: 65/65 [MB] (average 191 MBps) 00:26:56.917 00:26:56.917 06:19:27 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:26:56.917 06:19:27 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:26:56.917 06:19:27 -- dd/common.sh@31 -- # xtrace_disable 00:26:56.917 06:19:27 -- common/autotest_common.sh@10 -- # set +x 00:26:57.176 { 00:26:57.176 "subsystems": [ 00:26:57.176 { 00:26:57.176 "subsystem": "bdev", 00:26:57.176 "config": [ 00:26:57.176 { 00:26:57.176 "params": { 00:26:57.176 "block_size": 4096, 00:26:57.176 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:57.176 "name": "aio1" 00:26:57.176 }, 00:26:57.176 "method": "bdev_aio_create" 00:26:57.176 }, 00:26:57.176 { 00:26:57.176 "params": { 00:26:57.176 "trtype": "pcie", 00:26:57.176 "traddr": "0000:00:06.0", 00:26:57.176 "name": "Nvme0" 00:26:57.176 }, 00:26:57.176 "method": "bdev_nvme_attach_controller" 00:26:57.176 }, 00:26:57.176 { 00:26:57.176 "method": "bdev_wait_for_examine" 00:26:57.176 } 00:26:57.176 ] 00:26:57.176 } 00:26:57.176 ] 00:26:57.176 } 00:26:57.176 [2024-06-11 06:19:27.590406] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:57.176 [2024-06-11 06:19:27.590619] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131444 ] 00:26:57.176 [2024-06-11 06:19:27.773883] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.435 [2024-06-11 06:19:28.011741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.384  Copying: 1024/1024 [kB] (average 500 MBps) 00:26:59.384 00:26:59.384 06:19:29 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:26:59.384 06:19:29 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:26:59.384 00:26:59.384 real 0m10.805s 00:26:59.384 user 0m8.237s 00:26:59.384 sys 0m1.470s 00:26:59.384 06:19:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:59.384 ************************************ 00:26:59.384 END TEST dd_offset_magic 00:26:59.384 ************************************ 00:26:59.384 06:19:29 -- common/autotest_common.sh@10 -- # set +x 00:26:59.642 06:19:30 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:26:59.642 06:19:30 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:26:59.643 06:19:30 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:59.643 06:19:30 -- dd/common.sh@11 -- # local nvme_ref= 00:26:59.643 06:19:30 -- dd/common.sh@12 -- # local size=4194330 00:26:59.643 06:19:30 -- dd/common.sh@14 -- # local bs=1048576 00:26:59.643 06:19:30 -- dd/common.sh@15 -- # local count=5 00:26:59.643 06:19:30 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:26:59.643 06:19:30 -- dd/common.sh@18 -- # gen_conf 00:26:59.643 06:19:30 -- dd/common.sh@31 -- # xtrace_disable 00:26:59.643 06:19:30 -- common/autotest_common.sh@10 -- # set +x 00:26:59.643 { 00:26:59.643 "subsystems": [ 00:26:59.643 { 00:26:59.643 "subsystem": "bdev", 00:26:59.643 "config": [ 00:26:59.643 { 00:26:59.643 "params": { 00:26:59.643 "block_size": 4096, 00:26:59.643 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:59.643 "name": "aio1" 00:26:59.643 }, 00:26:59.643 "method": "bdev_aio_create" 00:26:59.643 }, 00:26:59.643 { 00:26:59.643 "params": { 00:26:59.643 "trtype": "pcie", 00:26:59.643 "traddr": "0000:00:06.0", 00:26:59.643 "name": "Nvme0" 00:26:59.643 }, 00:26:59.643 "method": "bdev_nvme_attach_controller" 00:26:59.643 }, 00:26:59.643 { 00:26:59.643 "method": "bdev_wait_for_examine" 00:26:59.643 } 00:26:59.643 ] 00:26:59.643 } 00:26:59.643 ] 00:26:59.643 } 00:26:59.643 [2024-06-11 06:19:30.119173] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:26:59.643 [2024-06-11 06:19:30.119340] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131500 ] 00:26:59.643 [2024-06-11 06:19:30.282255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.211 [2024-06-11 06:19:30.550992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.382  Copying: 5120/5120 [kB] (average 1250 MBps) 00:27:02.382 00:27:02.382 06:19:32 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:27:02.382 06:19:32 -- dd/common.sh@10 -- # local bdev=aio1 00:27:02.382 06:19:32 -- dd/common.sh@11 -- # local nvme_ref= 00:27:02.382 06:19:32 -- dd/common.sh@12 -- # local size=4194330 00:27:02.382 06:19:32 -- dd/common.sh@14 -- # local bs=1048576 00:27:02.382 06:19:32 -- dd/common.sh@15 -- # local count=5 00:27:02.382 06:19:32 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:27:02.382 06:19:32 -- dd/common.sh@18 -- # gen_conf 00:27:02.382 06:19:32 -- dd/common.sh@31 -- # xtrace_disable 00:27:02.382 06:19:32 -- common/autotest_common.sh@10 -- # set +x 00:27:02.382 { 00:27:02.382 "subsystems": [ 00:27:02.382 { 00:27:02.382 "subsystem": "bdev", 00:27:02.382 "config": [ 00:27:02.382 { 00:27:02.382 "params": { 00:27:02.382 "block_size": 4096, 00:27:02.382 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:27:02.382 "name": "aio1" 00:27:02.382 }, 00:27:02.382 "method": "bdev_aio_create" 00:27:02.382 }, 00:27:02.382 { 00:27:02.382 "params": { 00:27:02.382 "trtype": "pcie", 00:27:02.382 "traddr": "0000:00:06.0", 00:27:02.382 "name": "Nvme0" 00:27:02.382 }, 00:27:02.382 "method": "bdev_nvme_attach_controller" 00:27:02.382 }, 00:27:02.382 { 00:27:02.382 "method": "bdev_wait_for_examine" 00:27:02.382 } 00:27:02.382 ] 00:27:02.382 } 00:27:02.382 ] 00:27:02.382 } 00:27:02.382 [2024-06-11 06:19:32.641681] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:27:02.382 [2024-06-11 06:19:32.642353] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131534 ] 00:27:02.382 [2024-06-11 06:19:32.832999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.641 [2024-06-11 06:19:33.095579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.587  Copying: 5120/5120 [kB] (average 185 MBps) 00:27:04.587 00:27:04.587 06:19:35 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:27:04.587 00:27:04.587 real 0m24.535s 00:27:04.587 user 0m19.000s 00:27:04.587 sys 0m3.817s 00:27:04.587 06:19:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:04.587 06:19:35 -- common/autotest_common.sh@10 -- # set +x 00:27:04.587 ************************************ 00:27:04.587 END TEST spdk_dd_bdev_to_bdev 00:27:04.587 ************************************ 00:27:04.847 06:19:35 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:27:04.847 06:19:35 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:27:04.847 06:19:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:04.847 06:19:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:04.847 06:19:35 -- common/autotest_common.sh@10 -- # set +x 00:27:04.847 ************************************ 00:27:04.847 START TEST spdk_dd_sparse 00:27:04.847 ************************************ 00:27:04.847 06:19:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:27:04.847 * Looking for test storage... 
00:27:04.847 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:04.847 06:19:35 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:04.847 06:19:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:04.847 06:19:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:04.847 06:19:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:04.847 06:19:35 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:04.847 06:19:35 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:04.847 06:19:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:04.847 06:19:35 -- paths/export.sh@5 -- # export PATH 00:27:04.847 06:19:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:04.847 06:19:35 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:27:04.847 06:19:35 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:27:04.847 06:19:35 -- dd/sparse.sh@110 -- # file1=file_zero1 00:27:04.847 06:19:35 -- dd/sparse.sh@111 -- # file2=file_zero2 00:27:04.847 06:19:35 -- dd/sparse.sh@112 -- # file3=file_zero3 00:27:04.847 06:19:35 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:27:04.847 06:19:35 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:27:04.847 06:19:35 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:27:04.847 06:19:35 -- dd/sparse.sh@118 -- # prepare 00:27:04.847 06:19:35 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:27:04.847 06:19:35 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:27:04.847 1+0 records in 00:27:04.847 1+0 records 
out 00:27:04.847 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00918345 s, 457 MB/s 00:27:04.847 06:19:35 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:27:04.847 1+0 records in 00:27:04.847 1+0 records out 00:27:04.847 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0111546 s, 376 MB/s 00:27:04.847 06:19:35 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:27:04.847 1+0 records in 00:27:04.847 1+0 records out 00:27:04.847 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0135776 s, 309 MB/s 00:27:04.847 06:19:35 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:27:04.847 06:19:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:04.847 06:19:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:04.847 06:19:35 -- common/autotest_common.sh@10 -- # set +x 00:27:04.847 ************************************ 00:27:04.847 START TEST dd_sparse_file_to_file 00:27:04.847 ************************************ 00:27:04.847 06:19:35 -- common/autotest_common.sh@1104 -- # file_to_file 00:27:04.847 06:19:35 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:27:04.847 06:19:35 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:27:04.847 06:19:35 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:27:04.847 06:19:35 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:27:04.847 06:19:35 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:27:04.847 06:19:35 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:27:04.847 06:19:35 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:27:04.847 06:19:35 -- dd/sparse.sh@41 -- # gen_conf 00:27:04.847 06:19:35 -- dd/common.sh@31 -- # xtrace_disable 00:27:04.847 06:19:35 -- common/autotest_common.sh@10 -- # set +x 00:27:05.106 { 00:27:05.106 "subsystems": [ 00:27:05.106 { 00:27:05.106 "subsystem": "bdev", 00:27:05.106 "config": [ 00:27:05.106 { 00:27:05.106 "params": { 00:27:05.106 "block_size": 4096, 00:27:05.106 "filename": "dd_sparse_aio_disk", 00:27:05.106 "name": "dd_aio" 00:27:05.106 }, 00:27:05.106 "method": "bdev_aio_create" 00:27:05.106 }, 00:27:05.106 { 00:27:05.106 "params": { 00:27:05.106 "lvs_name": "dd_lvstore", 00:27:05.106 "bdev_name": "dd_aio" 00:27:05.106 }, 00:27:05.106 "method": "bdev_lvol_create_lvstore" 00:27:05.106 }, 00:27:05.106 { 00:27:05.106 "method": "bdev_wait_for_examine" 00:27:05.106 } 00:27:05.106 ] 00:27:05.106 } 00:27:05.106 ] 00:27:05.106 } 00:27:05.106 [2024-06-11 06:19:35.550420] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
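The prepare step above laid file_zero1 out deliberately: three 4 MiB writes at seek 0, 4 and 8 (in 4 MiB units) give an apparent size of 36 MiB (37748736 bytes) with two 12 MiB holes, so only 12 MiB, or 24576 blocks of 512 bytes, is actually allocated; those are exactly the %s and %b figures the stat checks below compare, and --sparse is what tells spdk_dd to carry the holes over into file_zero2. The layout reproduced and inspected:

  dd if=/dev/zero of=file_zero1 bs=4M count=1
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8
  stat --printf='apparent=%s blocks=%b\n' file_zero1   # 37748736 and 24576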
00:27:05.106 [2024-06-11 06:19:35.550648] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131630 ] 00:27:05.106 [2024-06-11 06:19:35.734983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.365 [2024-06-11 06:19:36.004801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.840  Copying: 12/36 [MB] (average 750 MBps) 00:27:07.840 00:27:07.840 06:19:38 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:27:07.840 06:19:38 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:27:07.840 06:19:38 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:27:07.840 06:19:38 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:27:07.840 06:19:38 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:27:07.840 06:19:38 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:27:07.840 06:19:38 -- dd/sparse.sh@52 -- # stat1_b=24576 00:27:07.840 06:19:38 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:27:07.840 06:19:38 -- dd/sparse.sh@53 -- # stat2_b=24576 00:27:07.840 06:19:38 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:27:07.840 00:27:07.840 real 0m2.639s 00:27:07.840 user 0m2.086s 00:27:07.840 sys 0m0.421s 00:27:07.840 06:19:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:07.840 06:19:38 -- common/autotest_common.sh@10 -- # set +x 00:27:07.840 ************************************ 00:27:07.840 END TEST dd_sparse_file_to_file 00:27:07.840 ************************************ 00:27:07.840 06:19:38 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:27:07.840 06:19:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:07.840 06:19:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:07.840 06:19:38 -- common/autotest_common.sh@10 -- # set +x 00:27:07.840 ************************************ 00:27:07.840 START TEST dd_sparse_file_to_bdev 00:27:07.840 ************************************ 00:27:07.840 06:19:38 -- common/autotest_common.sh@1104 -- # file_to_bdev 00:27:07.840 06:19:38 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:27:07.840 06:19:38 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:27:07.840 06:19:38 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:27:07.840 06:19:38 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:27:07.840 06:19:38 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:27:07.840 06:19:38 -- dd/sparse.sh@73 -- # gen_conf 00:27:07.840 06:19:38 -- dd/common.sh@31 -- # xtrace_disable 00:27:07.840 06:19:38 -- common/autotest_common.sh@10 -- # set +x 00:27:07.840 { 00:27:07.840 "subsystems": [ 00:27:07.840 { 00:27:07.840 "subsystem": "bdev", 00:27:07.840 "config": [ 00:27:07.840 { 00:27:07.840 "params": { 00:27:07.840 "block_size": 4096, 00:27:07.840 "filename": "dd_sparse_aio_disk", 00:27:07.840 "name": "dd_aio" 00:27:07.840 }, 00:27:07.840 "method": "bdev_aio_create" 00:27:07.840 }, 00:27:07.840 { 00:27:07.840 "params": { 00:27:07.840 "lvs_name": "dd_lvstore", 00:27:07.840 "lvol_name": "dd_lvol", 00:27:07.840 "size": 37748736, 00:27:07.840 "thin_provision": true 00:27:07.840 }, 
00:27:07.840 "method": "bdev_lvol_create" 00:27:07.840 }, 00:27:07.840 { 00:27:07.840 "method": "bdev_wait_for_examine" 00:27:07.840 } 00:27:07.840 ] 00:27:07.840 } 00:27:07.840 ] 00:27:07.840 } 00:27:07.840 [2024-06-11 06:19:38.254922] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:07.840 [2024-06-11 06:19:38.255129] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131702 ] 00:27:07.840 [2024-06-11 06:19:38.441458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.099 [2024-06-11 06:19:38.688693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.667 [2024-06-11 06:19:39.100740] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:27:08.667  Copying: 12/36 [MB] (average 521 MBps)[2024-06-11 06:19:39.170695] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:27:10.572 00:27:10.572 00:27:10.572 00:27:10.572 real 0m2.615s 00:27:10.572 user 0m2.106s 00:27:10.572 sys 0m0.391s 00:27:10.572 06:19:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:10.572 ************************************ 00:27:10.572 END TEST dd_sparse_file_to_bdev 00:27:10.572 ************************************ 00:27:10.572 06:19:40 -- common/autotest_common.sh@10 -- # set +x 00:27:10.572 06:19:40 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:27:10.572 06:19:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:10.572 06:19:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:10.572 06:19:40 -- common/autotest_common.sh@10 -- # set +x 00:27:10.572 ************************************ 00:27:10.572 START TEST dd_sparse_bdev_to_file 00:27:10.573 ************************************ 00:27:10.573 06:19:40 -- common/autotest_common.sh@1104 -- # bdev_to_file 00:27:10.573 06:19:40 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:27:10.573 06:19:40 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:27:10.573 06:19:40 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:27:10.573 06:19:40 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:27:10.573 06:19:40 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:27:10.573 06:19:40 -- dd/sparse.sh@91 -- # gen_conf 00:27:10.573 06:19:40 -- dd/common.sh@31 -- # xtrace_disable 00:27:10.573 06:19:40 -- common/autotest_common.sh@10 -- # set +x 00:27:10.573 { 00:27:10.573 "subsystems": [ 00:27:10.573 { 00:27:10.573 "subsystem": "bdev", 00:27:10.573 "config": [ 00:27:10.573 { 00:27:10.573 "params": { 00:27:10.573 "block_size": 4096, 00:27:10.573 "filename": "dd_sparse_aio_disk", 00:27:10.573 "name": "dd_aio" 00:27:10.573 }, 00:27:10.573 "method": "bdev_aio_create" 00:27:10.573 }, 00:27:10.573 { 00:27:10.573 "method": "bdev_wait_for_examine" 00:27:10.573 } 00:27:10.573 ] 00:27:10.573 } 00:27:10.573 ] 00:27:10.573 } 00:27:10.573 [2024-06-11 06:19:40.924754] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:27:10.573 [2024-06-11 06:19:40.924985] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131765 ] 00:27:10.573 [2024-06-11 06:19:41.109212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.832 [2024-06-11 06:19:41.349833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.814  Copying: 12/36 [MB] (average 923 MBps) 00:27:12.814 00:27:12.814 06:19:43 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:27:12.814 06:19:43 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:27:12.814 06:19:43 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:27:12.814 06:19:43 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:27:12.814 06:19:43 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:27:12.814 06:19:43 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:27:12.814 06:19:43 -- dd/sparse.sh@102 -- # stat2_b=24576 00:27:12.814 06:19:43 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:27:12.814 06:19:43 -- dd/sparse.sh@103 -- # stat3_b=24576 00:27:12.814 06:19:43 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:27:12.814 00:27:12.814 real 0m2.578s 00:27:12.814 user 0m2.109s 00:27:12.814 sys 0m0.353s 00:27:12.814 06:19:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:12.814 06:19:43 -- common/autotest_common.sh@10 -- # set +x 00:27:12.814 ************************************ 00:27:12.814 END TEST dd_sparse_bdev_to_file 00:27:12.814 ************************************ 00:27:13.074 06:19:43 -- dd/sparse.sh@1 -- # cleanup 00:27:13.074 06:19:43 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:27:13.074 06:19:43 -- dd/sparse.sh@12 -- # rm file_zero1 00:27:13.074 06:19:43 -- dd/sparse.sh@13 -- # rm file_zero2 00:27:13.074 06:19:43 -- dd/sparse.sh@14 -- # rm file_zero3 00:27:13.074 00:27:13.074 real 0m8.221s 00:27:13.074 user 0m6.463s 00:27:13.074 sys 0m1.405s 00:27:13.074 06:19:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:13.074 06:19:43 -- common/autotest_common.sh@10 -- # set +x 00:27:13.074 ************************************ 00:27:13.074 END TEST spdk_dd_sparse 00:27:13.074 ************************************ 00:27:13.074 06:19:43 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:27:13.074 06:19:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:13.074 06:19:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:13.074 06:19:43 -- common/autotest_common.sh@10 -- # set +x 00:27:13.074 ************************************ 00:27:13.074 START TEST spdk_dd_negative 00:27:13.074 ************************************ 00:27:13.074 06:19:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:27:13.074 * Looking for test storage... 
00:27:13.074 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:13.074 06:19:43 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:13.074 06:19:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:13.074 06:19:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:13.074 06:19:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:13.074 06:19:43 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:13.074 06:19:43 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:13.074 06:19:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:13.074 06:19:43 -- paths/export.sh@5 -- # export PATH 00:27:13.074 06:19:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:13.074 06:19:43 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:13.074 06:19:43 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:13.074 06:19:43 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:13.074 06:19:43 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:13.074 06:19:43 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:27:13.074 06:19:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:13.074 06:19:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:13.074 06:19:43 -- common/autotest_common.sh@10 -- # set +x 00:27:13.074 ************************************ 00:27:13.074 
START TEST dd_invalid_arguments 00:27:13.074 ************************************ 00:27:13.074 06:19:43 -- common/autotest_common.sh@1104 -- # invalid_arguments 00:27:13.074 06:19:43 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:27:13.074 06:19:43 -- common/autotest_common.sh@640 -- # local es=0 00:27:13.074 06:19:43 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:27:13.074 06:19:43 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.074 06:19:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:13.074 06:19:43 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.074 06:19:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:13.074 06:19:43 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.074 06:19:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:13.074 06:19:43 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.074 06:19:43 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:13.332 06:19:43 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:27:13.332 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:27:13.332 options: 00:27:13.332 -c, --config JSON config file (default none) 00:27:13.332 --json JSON config file (default none) 00:27:13.332 --json-ignore-init-errors 00:27:13.332 don't exit on invalid config entry 00:27:13.332 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:27:13.332 -g, --single-file-segments 00:27:13.332 force creating just one hugetlbfs file 00:27:13.332 -h, --help show this usage 00:27:13.332 -i, --shm-id shared memory ID (optional) 00:27:13.332 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:27:13.332 --lcores lcore to CPU mapping list. The list is in the format: 00:27:13.332 [<,lcores[@CPUs]>...] 00:27:13.332 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:27:13.332 Within the group, '-' is used for range separator, 00:27:13.332 ',' is used for single number separator. 00:27:13.332 '( )' can be omitted for single element group, 00:27:13.332 '@' can be omitted if cpus and lcores have the same value 00:27:13.332 -n, --mem-channels channel number of memory channels used for DPDK 00:27:13.333 -p, --main-core main (primary) core for DPDK 00:27:13.333 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:27:13.333 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:27:13.333 --disable-cpumask-locks Disable CPU core lock files. 
00:27:13.333 --silence-noticelog disable notice level logging to stderr 00:27:13.333 --msg-mempool-size global message memory pool size in count (default: 262143) 00:27:13.333 -u, --no-pci disable PCI access 00:27:13.333 --wait-for-rpc wait for RPCs to initialize subsystems 00:27:13.333 --max-delay maximum reactor delay (in microseconds) 00:27:13.333 -B, --pci-blocked pci addr to block (can be used more than once) 00:27:13.333 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:27:13.333 -R, --huge-unlink unlink huge files after initialization 00:27:13.333 -v, --version print SPDK version 00:27:13.333 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:27:13.333 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:27:13.333 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:27:13.333 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:27:13.333 Tracepoints vary in size and can use more than one trace entry. 00:27:13.333 --rpcs-allowed comma-separated list of permitted RPCs 00:27:13.333 --env-context Opaque context for use of the env implementation 00:27:13.333 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:27:13.333 --no-huge run without using hugepages 00:27:13.333 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:27:13.333 -e, --tpoint-group [:] 00:27:13.333 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:27:13.333 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:27:13.333 Groups and masks can be /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:27:13.333 [2024-06-11 06:19:43.797789] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:27:13.333 combined (e.g. thread,bdev:0x1). 00:27:13.333 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:27:13.333 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:27:13.333 [--------- DD Options ---------] 00:27:13.333 --if Input file. Must specify either --if or --ib. 00:27:13.333 --ib Input bdev. Must specify either --if or --ib. 00:27:13.333 --of Output file. Must specify either --of or --ob. 00:27:13.333 --ob Output bdev. Must specify either --of or --ob. 00:27:13.333 --iflag Input file flags. 00:27:13.333 --oflag Output file flags. 00:27:13.333 --bs I/O unit size (default: 4096) 00:27:13.333 --qd Queue depth (default: 2) 00:27:13.333 --count I/O unit count. The number of I/O units to copy.
(default: all) 00:27:13.333 --skip Skip this many I/O units at start of input. (default: 0) 00:27:13.333 --seek Skip this many I/O units at start of output. (default: 0) 00:27:13.333 --aio Force usage of AIO. (by default io_uring is used if available) 00:27:13.333 --sparse Enable hole skipping in input target 00:27:13.333 Available iflag and oflag values: 00:27:13.333 append - append mode 00:27:13.333 direct - use direct I/O for data 00:27:13.333 directory - fail unless a directory 00:27:13.333 dsync - use synchronized I/O for data 00:27:13.333 noatime - do not update access time 00:27:13.333 noctty - do not assign controlling terminal from file 00:27:13.333 nofollow - do not follow symlinks 00:27:13.333 nonblock - use non-blocking I/O 00:27:13.333 sync - use synchronized I/O for data and metadata 00:27:13.333 06:19:43 -- common/autotest_common.sh@643 -- # es=2 00:27:13.333 06:19:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:13.333 06:19:43 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:13.333 06:19:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:13.333 00:27:13.333 real 0m0.148s 00:27:13.333 user 0m0.080s 00:27:13.333 sys 0m0.069s 00:27:13.333 ************************************ 00:27:13.333 END TEST dd_invalid_arguments 00:27:13.333 ************************************ 00:27:13.333 06:19:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:13.333 06:19:43 -- common/autotest_common.sh@10 -- # set +x 00:27:13.333 06:19:43 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:27:13.333 06:19:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:13.333 06:19:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:13.333 06:19:43 -- common/autotest_common.sh@10 -- # set +x 00:27:13.333 ************************************ 00:27:13.333 START TEST dd_double_input 00:27:13.333 ************************************ 00:27:13.333 06:19:43 -- common/autotest_common.sh@1104 -- # double_input 00:27:13.333 06:19:43 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:27:13.333 06:19:43 -- common/autotest_common.sh@640 -- # local es=0 00:27:13.333 06:19:43 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:27:13.333 06:19:43 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.333 06:19:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:13.333 06:19:43 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.333 06:19:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:13.333 06:19:43 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.333 06:19:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:13.333 06:19:43 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.333 06:19:43 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:13.333 06:19:43 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:27:13.592 [2024-06-11 06:19:43.999645] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:27:13.592 06:19:44 -- common/autotest_common.sh@643 -- # es=22 00:27:13.592 06:19:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:13.592 06:19:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:13.592 06:19:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:13.592 00:27:13.592 real 0m0.124s 00:27:13.592 user 0m0.054s 00:27:13.592 sys 0m0.070s 00:27:13.592 06:19:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:13.592 ************************************ 00:27:13.592 END TEST dd_double_input 00:27:13.592 ************************************ 00:27:13.592 06:19:44 -- common/autotest_common.sh@10 -- # set +x 00:27:13.592 06:19:44 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:27:13.592 06:19:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:13.592 06:19:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:13.592 06:19:44 -- common/autotest_common.sh@10 -- # set +x 00:27:13.592 ************************************ 00:27:13.592 START TEST dd_double_output 00:27:13.592 ************************************ 00:27:13.592 06:19:44 -- common/autotest_common.sh@1104 -- # double_output 00:27:13.592 06:19:44 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:27:13.592 06:19:44 -- common/autotest_common.sh@640 -- # local es=0 00:27:13.592 06:19:44 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:27:13.592 06:19:44 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.592 06:19:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:13.592 06:19:44 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.592 06:19:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:13.592 06:19:44 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.592 06:19:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:13.592 06:19:44 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.592 06:19:44 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:13.592 06:19:44 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:27:13.592 [2024-06-11 06:19:44.188401] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:27:13.851 06:19:44 -- common/autotest_common.sh@643 -- # es=22 00:27:13.851 06:19:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:13.851 06:19:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:13.851 06:19:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:13.851 00:27:13.851 real 0m0.130s 00:27:13.851 user 0m0.067s 00:27:13.851 sys 0m0.064s 00:27:13.851 ************************************ 00:27:13.851 END TEST dd_double_output 00:27:13.851 ************************************ 00:27:13.851 06:19:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:13.851 06:19:44 -- common/autotest_common.sh@10 -- # set +x 00:27:13.851 06:19:44 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:27:13.851 06:19:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:13.851 06:19:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:13.851 06:19:44 -- common/autotest_common.sh@10 -- # set +x 00:27:13.851 ************************************ 00:27:13.851 START TEST dd_no_input 00:27:13.851 ************************************ 00:27:13.851 06:19:44 -- common/autotest_common.sh@1104 -- # no_input 00:27:13.851 06:19:44 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:27:13.851 06:19:44 -- common/autotest_common.sh@640 -- # local es=0 00:27:13.851 06:19:44 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:27:13.851 06:19:44 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.851 06:19:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:13.851 06:19:44 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.851 06:19:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:13.851 06:19:44 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.851 06:19:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:13.851 06:19:44 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.851 06:19:44 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:13.851 06:19:44 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:27:13.851 [2024-06-11 06:19:44.403237] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:27:13.851 06:19:44 -- common/autotest_common.sh@643 -- # es=22 00:27:13.851 06:19:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:13.851 06:19:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:13.851 06:19:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:13.851 00:27:13.851 real 0m0.150s 00:27:13.851 user 0m0.072s 00:27:13.851 sys 0m0.079s 00:27:13.851 ************************************ 00:27:13.851 END TEST dd_no_input 00:27:13.851 ************************************ 00:27:13.851 06:19:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:13.851 06:19:44 -- common/autotest_common.sh@10 -- # set +x 00:27:14.110 06:19:44 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:27:14.110 06:19:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:14.110 06:19:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:14.110 06:19:44 -- common/autotest_common.sh@10 -- # set +x 00:27:14.110 ************************************ 
00:27:14.110 START TEST dd_no_output 00:27:14.110 ************************************ 00:27:14.110 06:19:44 -- common/autotest_common.sh@1104 -- # no_output 00:27:14.110 06:19:44 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:14.110 06:19:44 -- common/autotest_common.sh@640 -- # local es=0 00:27:14.110 06:19:44 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:14.110 06:19:44 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.110 06:19:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:14.110 06:19:44 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.110 06:19:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:14.110 06:19:44 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.110 06:19:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:14.110 06:19:44 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.110 06:19:44 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:14.110 06:19:44 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:14.110 [2024-06-11 06:19:44.625995] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:27:14.110 06:19:44 -- common/autotest_common.sh@643 -- # es=22 00:27:14.110 06:19:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:14.110 06:19:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:14.110 06:19:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:14.110 00:27:14.110 real 0m0.149s 00:27:14.110 user 0m0.068s 00:27:14.110 sys 0m0.082s 00:27:14.110 ************************************ 00:27:14.110 END TEST dd_no_output 00:27:14.110 ************************************ 00:27:14.110 06:19:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:14.110 06:19:44 -- common/autotest_common.sh@10 -- # set +x 00:27:14.110 06:19:44 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:27:14.110 06:19:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:14.110 06:19:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:14.110 06:19:44 -- common/autotest_common.sh@10 -- # set +x 00:27:14.110 ************************************ 00:27:14.110 START TEST dd_wrong_blocksize 00:27:14.110 ************************************ 00:27:14.110 06:19:44 -- common/autotest_common.sh@1104 -- # wrong_blocksize 00:27:14.110 06:19:44 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:27:14.110 06:19:44 -- common/autotest_common.sh@640 -- # local es=0 00:27:14.110 06:19:44 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:27:14.110 06:19:44 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.370 06:19:44 -- common/autotest_common.sh@632 -- # case 
"$(type -t "$arg")" in 00:27:14.370 06:19:44 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.370 06:19:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:14.370 06:19:44 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.370 06:19:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:14.370 06:19:44 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.370 06:19:44 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:14.370 06:19:44 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:27:14.370 [2024-06-11 06:19:44.840461] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:27:14.370 06:19:44 -- common/autotest_common.sh@643 -- # es=22 00:27:14.370 06:19:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:14.370 06:19:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:14.370 06:19:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:14.370 00:27:14.370 real 0m0.151s 00:27:14.370 user 0m0.074s 00:27:14.370 sys 0m0.078s 00:27:14.370 06:19:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:14.370 06:19:44 -- common/autotest_common.sh@10 -- # set +x 00:27:14.370 ************************************ 00:27:14.370 END TEST dd_wrong_blocksize 00:27:14.370 ************************************ 00:27:14.370 06:19:44 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:27:14.370 06:19:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:14.370 06:19:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:14.370 06:19:44 -- common/autotest_common.sh@10 -- # set +x 00:27:14.370 ************************************ 00:27:14.370 START TEST dd_smaller_blocksize 00:27:14.370 ************************************ 00:27:14.370 06:19:44 -- common/autotest_common.sh@1104 -- # smaller_blocksize 00:27:14.370 06:19:44 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:27:14.370 06:19:44 -- common/autotest_common.sh@640 -- # local es=0 00:27:14.370 06:19:44 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:27:14.370 06:19:44 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.370 06:19:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:14.370 06:19:44 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.370 06:19:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:14.370 06:19:44 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.370 06:19:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:14.370 06:19:44 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.370 06:19:44 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:27:14.370 06:19:44 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:27:14.630 [2024-06-11 06:19:45.063840] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:14.630 [2024-06-11 06:19:45.064080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132040 ] 00:27:14.630 [2024-06-11 06:19:45.253850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.199 [2024-06-11 06:19:45.584261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.768 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:27:16.027 [2024-06-11 06:19:46.467784] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:27:16.027 [2024-06-11 06:19:46.467899] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:16.981 [2024-06-11 06:19:47.378701] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:17.240 06:19:47 -- common/autotest_common.sh@643 -- # es=244 00:27:17.240 06:19:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:17.240 06:19:47 -- common/autotest_common.sh@652 -- # es=116 00:27:17.240 06:19:47 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:17.240 06:19:47 -- common/autotest_common.sh@660 -- # es=1 00:27:17.240 ************************************ 00:27:17.240 END TEST dd_smaller_blocksize 00:27:17.240 ************************************ 00:27:17.240 06:19:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:17.240 00:27:17.240 real 0m2.886s 00:27:17.240 user 0m2.024s 00:27:17.240 sys 0m0.762s 00:27:17.240 06:19:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:17.240 06:19:47 -- common/autotest_common.sh@10 -- # set +x 00:27:17.500 06:19:47 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:27:17.500 06:19:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:17.500 06:19:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:17.500 06:19:47 -- common/autotest_common.sh@10 -- # set +x 00:27:17.500 ************************************ 00:27:17.500 START TEST dd_invalid_count 00:27:17.500 ************************************ 00:27:17.500 06:19:47 -- common/autotest_common.sh@1104 -- # invalid_count 00:27:17.500 06:19:47 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:27:17.500 06:19:47 -- common/autotest_common.sh@640 -- # local es=0 00:27:17.500 06:19:47 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:27:17.500 06:19:47 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:17.500 06:19:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:17.500 06:19:47 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:17.500 06:19:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:17.500 06:19:47 
-- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:17.500 06:19:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:17.500 06:19:47 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:17.500 06:19:47 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:17.500 06:19:47 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:27:17.500 [2024-06-11 06:19:48.011580] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:27:17.500 06:19:48 -- common/autotest_common.sh@643 -- # es=22 00:27:17.500 06:19:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:17.500 06:19:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:17.500 06:19:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:17.500 00:27:17.500 real 0m0.145s 00:27:17.500 user 0m0.083s 00:27:17.500 sys 0m0.060s 00:27:17.500 06:19:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:17.500 ************************************ 00:27:17.500 END TEST dd_invalid_count 00:27:17.500 ************************************ 00:27:17.500 06:19:48 -- common/autotest_common.sh@10 -- # set +x 00:27:17.500 06:19:48 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:27:17.500 06:19:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:17.500 06:19:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:17.500 06:19:48 -- common/autotest_common.sh@10 -- # set +x 00:27:17.500 ************************************ 00:27:17.500 START TEST dd_invalid_oflag 00:27:17.500 ************************************ 00:27:17.500 06:19:48 -- common/autotest_common.sh@1104 -- # invalid_oflag 00:27:17.500 06:19:48 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:27:17.500 06:19:48 -- common/autotest_common.sh@640 -- # local es=0 00:27:17.500 06:19:48 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:27:17.500 06:19:48 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:17.500 06:19:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:17.500 06:19:48 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:17.760 06:19:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:17.760 06:19:48 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:17.760 06:19:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:17.760 06:19:48 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:17.760 06:19:48 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:17.760 06:19:48 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:27:17.760 [2024-06-11 06:19:48.208314] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:27:17.760 06:19:48 -- common/autotest_common.sh@643 -- # es=22 00:27:17.760 06:19:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:17.760 06:19:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:17.760 
06:19:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:17.760 ************************************ 00:27:17.760 END TEST dd_invalid_oflag 00:27:17.760 ************************************ 00:27:17.760 00:27:17.760 real 0m0.119s 00:27:17.760 user 0m0.059s 00:27:17.760 sys 0m0.061s 00:27:17.760 06:19:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:17.760 06:19:48 -- common/autotest_common.sh@10 -- # set +x 00:27:17.760 06:19:48 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:27:17.760 06:19:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:17.760 06:19:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:17.760 06:19:48 -- common/autotest_common.sh@10 -- # set +x 00:27:17.760 ************************************ 00:27:17.760 START TEST dd_invalid_iflag 00:27:17.760 ************************************ 00:27:17.760 06:19:48 -- common/autotest_common.sh@1104 -- # invalid_iflag 00:27:17.760 06:19:48 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:27:17.760 06:19:48 -- common/autotest_common.sh@640 -- # local es=0 00:27:17.760 06:19:48 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:27:17.760 06:19:48 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:17.760 06:19:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:17.760 06:19:48 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:17.760 06:19:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:17.760 06:19:48 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:17.760 06:19:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:17.760 06:19:48 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:17.760 06:19:48 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:17.760 06:19:48 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:27:17.760 [2024-06-11 06:19:48.393293] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:27:18.020 06:19:48 -- common/autotest_common.sh@643 -- # es=22 00:27:18.020 06:19:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:18.020 06:19:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:18.020 06:19:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:18.020 00:27:18.020 real 0m0.107s 00:27:18.020 user 0m0.046s 00:27:18.020 sys 0m0.061s 00:27:18.020 06:19:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:18.020 ************************************ 00:27:18.020 END TEST dd_invalid_iflag 00:27:18.020 ************************************ 00:27:18.020 06:19:48 -- common/autotest_common.sh@10 -- # set +x 00:27:18.020 06:19:48 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:27:18.020 06:19:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:18.020 06:19:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:18.020 06:19:48 -- common/autotest_common.sh@10 -- # set +x 00:27:18.020 ************************************ 00:27:18.020 START TEST dd_unknown_flag 00:27:18.020 ************************************ 00:27:18.020 06:19:48 -- common/autotest_common.sh@1104 -- # 
unknown_flag 00:27:18.020 06:19:48 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:27:18.020 06:19:48 -- common/autotest_common.sh@640 -- # local es=0 00:27:18.020 06:19:48 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:27:18.020 06:19:48 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:18.020 06:19:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:18.020 06:19:48 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:18.020 06:19:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:18.020 06:19:48 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:18.020 06:19:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:18.020 06:19:48 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:18.020 06:19:48 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:18.020 06:19:48 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:27:18.020 [2024-06-11 06:19:48.567363] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:18.020 [2024-06-11 06:19:48.567538] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132187 ] 00:27:18.281 [2024-06-11 06:19:48.728158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.543 [2024-06-11 06:19:48.954637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.802 [2024-06-11 06:19:49.337269] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:27:18.802 [2024-06-11 06:19:49.337377] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:27:18.802 [2024-06-11 06:19:49.337423] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:27:18.802 [2024-06-11 06:19:49.337480] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:19.738 [2024-06-11 06:19:50.262919] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:20.306 06:19:50 -- common/autotest_common.sh@643 -- # es=236 00:27:20.306 06:19:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:20.306 06:19:50 -- common/autotest_common.sh@652 -- # es=108 00:27:20.306 06:19:50 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:20.306 06:19:50 -- common/autotest_common.sh@660 -- # es=1 00:27:20.306 06:19:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:20.306 00:27:20.306 real 0m2.292s 00:27:20.306 user 0m1.870s 00:27:20.306 sys 0m0.323s 00:27:20.307 06:19:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:20.307 ************************************ 00:27:20.307 END TEST dd_unknown_flag 00:27:20.307 
************************************ 00:27:20.307 06:19:50 -- common/autotest_common.sh@10 -- # set +x 00:27:20.307 06:19:50 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:27:20.307 06:19:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:20.307 06:19:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:20.307 06:19:50 -- common/autotest_common.sh@10 -- # set +x 00:27:20.307 ************************************ 00:27:20.307 START TEST dd_invalid_json 00:27:20.307 ************************************ 00:27:20.307 06:19:50 -- common/autotest_common.sh@1104 -- # invalid_json 00:27:20.307 06:19:50 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:27:20.307 06:19:50 -- common/autotest_common.sh@640 -- # local es=0 00:27:20.307 06:19:50 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:27:20.307 06:19:50 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:20.307 06:19:50 -- dd/negative_dd.sh@95 -- # : 00:27:20.307 06:19:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:20.307 06:19:50 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:20.307 06:19:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:20.307 06:19:50 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:20.307 06:19:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:20.307 06:19:50 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:20.307 06:19:50 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:20.307 06:19:50 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:27:20.307 [2024-06-11 06:19:50.945791] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:27:20.307 [2024-06-11 06:19:50.946021] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132240 ] 00:27:20.566 [2024-06-11 06:19:51.128133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.825 [2024-06-11 06:19:51.366227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.825 [2024-06-11 06:19:51.366461] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:27:20.825 [2024-06-11 06:19:51.366504] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:20.825 [2024-06-11 06:19:51.366584] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:21.393 06:19:51 -- common/autotest_common.sh@643 -- # es=234 00:27:21.393 06:19:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:21.393 06:19:51 -- common/autotest_common.sh@652 -- # es=106 00:27:21.393 06:19:51 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:21.393 06:19:51 -- common/autotest_common.sh@660 -- # es=1 00:27:21.393 06:19:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:21.393 00:27:21.393 real 0m1.003s 00:27:21.393 user 0m0.703s 00:27:21.393 sys 0m0.203s 00:27:21.393 06:19:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:21.393 ************************************ 00:27:21.393 END TEST dd_invalid_json 00:27:21.393 ************************************ 00:27:21.393 06:19:51 -- common/autotest_common.sh@10 -- # set +x 00:27:21.393 00:27:21.393 real 0m8.340s 00:27:21.393 user 0m5.629s 00:27:21.393 sys 0m2.419s 00:27:21.393 06:19:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:21.393 ************************************ 00:27:21.393 END TEST spdk_dd_negative 00:27:21.393 ************************************ 00:27:21.393 06:19:51 -- common/autotest_common.sh@10 -- # set +x 00:27:21.393 00:27:21.393 real 3m15.882s 00:27:21.393 user 2m34.819s 00:27:21.393 sys 0m30.990s 00:27:21.393 06:19:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:21.393 06:19:51 -- common/autotest_common.sh@10 -- # set +x 00:27:21.393 ************************************ 00:27:21.393 END TEST spdk_dd 00:27:21.393 ************************************ 00:27:21.393 06:19:52 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:27:21.393 06:19:52 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:27:21.393 06:19:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:21.393 06:19:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:21.393 06:19:52 -- common/autotest_common.sh@10 -- # set +x 00:27:21.393 ************************************ 00:27:21.393 START TEST blockdev_nvme 00:27:21.393 ************************************ 00:27:21.393 06:19:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:27:21.680 * Looking for test storage... 
00:27:21.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:27:21.680 06:19:52 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:27:21.680 06:19:52 -- bdev/nbd_common.sh@6 -- # set -e 00:27:21.680 06:19:52 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:27:21.680 06:19:52 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:21.680 06:19:52 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:27:21.680 06:19:52 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:27:21.680 06:19:52 -- bdev/blockdev.sh@18 -- # : 00:27:21.680 06:19:52 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:27:21.680 06:19:52 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:27:21.680 06:19:52 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:27:21.680 06:19:52 -- bdev/blockdev.sh@672 -- # uname -s 00:27:21.680 06:19:52 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:27:21.680 06:19:52 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:27:21.680 06:19:52 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:27:21.680 06:19:52 -- bdev/blockdev.sh@681 -- # crypto_device= 00:27:21.680 06:19:52 -- bdev/blockdev.sh@682 -- # dek= 00:27:21.680 06:19:52 -- bdev/blockdev.sh@683 -- # env_ctx= 00:27:21.680 06:19:52 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:27:21.680 06:19:52 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:27:21.680 06:19:52 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:27:21.680 06:19:52 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:27:21.680 06:19:52 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:27:21.680 06:19:52 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=132334 00:27:21.680 06:19:52 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:27:21.680 06:19:52 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:27:21.680 06:19:52 -- bdev/blockdev.sh@47 -- # waitforlisten 132334 00:27:21.680 06:19:52 -- common/autotest_common.sh@819 -- # '[' -z 132334 ']' 00:27:21.680 06:19:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.680 06:19:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:21.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:21.681 06:19:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.681 06:19:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:21.681 06:19:52 -- common/autotest_common.sh@10 -- # set +x 00:27:21.681 [2024-06-11 06:19:52.243689] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:27:21.681 [2024-06-11 06:19:52.244549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132334 ] 00:27:21.943 [2024-06-11 06:19:52.428244] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.201 [2024-06-11 06:19:52.661884] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:22.201 [2024-06-11 06:19:52.662143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.578 06:19:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:23.578 06:19:53 -- common/autotest_common.sh@852 -- # return 0 00:27:23.578 06:19:53 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:27:23.578 06:19:53 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:27:23.578 06:19:53 -- bdev/blockdev.sh@79 -- # local json 00:27:23.578 06:19:53 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:27:23.578 06:19:53 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:23.578 06:19:54 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:27:23.578 06:19:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:23.578 06:19:54 -- common/autotest_common.sh@10 -- # set +x 00:27:23.578 06:19:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:23.578 06:19:54 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:27:23.578 06:19:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:23.578 06:19:54 -- common/autotest_common.sh@10 -- # set +x 00:27:23.578 06:19:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:23.578 06:19:54 -- bdev/blockdev.sh@738 -- # cat 00:27:23.578 06:19:54 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:27:23.578 06:19:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:23.578 06:19:54 -- common/autotest_common.sh@10 -- # set +x 00:27:23.578 06:19:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:23.578 06:19:54 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:27:23.578 06:19:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:23.578 06:19:54 -- common/autotest_common.sh@10 -- # set +x 00:27:23.578 06:19:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:23.578 06:19:54 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:27:23.578 06:19:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:23.579 06:19:54 -- common/autotest_common.sh@10 -- # set +x 00:27:23.579 06:19:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:23.579 06:19:54 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:27:23.579 06:19:54 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:27:23.579 06:19:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:23.579 06:19:54 -- common/autotest_common.sh@10 -- # set +x 00:27:23.579 06:19:54 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:27:23.579 06:19:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:23.579 06:19:54 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:27:23.579 06:19:54 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "eb72e953-206d-4c04-8a30-6a85f86b4ca0"' ' ],' ' 
"product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "eb72e953-206d-4c04-8a30-6a85f86b4ca0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:27:23.579 06:19:54 -- bdev/blockdev.sh@747 -- # jq -r .name 00:27:23.837 06:19:54 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:27:23.837 06:19:54 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:27:23.838 06:19:54 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:27:23.838 06:19:54 -- bdev/blockdev.sh@752 -- # killprocess 132334 00:27:23.838 06:19:54 -- common/autotest_common.sh@926 -- # '[' -z 132334 ']' 00:27:23.838 06:19:54 -- common/autotest_common.sh@930 -- # kill -0 132334 00:27:23.838 06:19:54 -- common/autotest_common.sh@931 -- # uname 00:27:23.838 06:19:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:23.838 06:19:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 132334 00:27:23.838 06:19:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:23.838 killing process with pid 132334 00:27:23.838 06:19:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:23.838 06:19:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 132334' 00:27:23.838 06:19:54 -- common/autotest_common.sh@945 -- # kill 132334 00:27:23.838 06:19:54 -- common/autotest_common.sh@950 -- # wait 132334 00:27:26.373 06:19:56 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:26.373 06:19:56 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:27:26.373 06:19:56 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:27:26.373 06:19:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:26.373 06:19:56 -- common/autotest_common.sh@10 -- # set +x 00:27:26.373 ************************************ 00:27:26.373 START TEST bdev_hello_world 00:27:26.373 ************************************ 00:27:26.373 06:19:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:27:26.373 [2024-06-11 06:19:56.903118] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:27:26.373 [2024-06-11 06:19:56.903332] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132435 ] 00:27:26.632 [2024-06-11 06:19:57.088554] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.891 [2024-06-11 06:19:57.368903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.459 [2024-06-11 06:19:57.908567] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:27:27.459 [2024-06-11 06:19:57.908639] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:27:27.459 [2024-06-11 06:19:57.908683] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:27:27.459 [2024-06-11 06:19:57.911947] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:27:27.459 [2024-06-11 06:19:57.912593] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:27:27.459 [2024-06-11 06:19:57.912635] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:27:27.459 [2024-06-11 06:19:57.912887] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:27:27.459 00:27:27.459 [2024-06-11 06:19:57.912927] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:27:29.363 ************************************ 00:27:29.363 END TEST bdev_hello_world 00:27:29.363 ************************************ 00:27:29.363 00:27:29.363 real 0m2.677s 00:27:29.363 user 0m2.254s 00:27:29.363 sys 0m0.325s 00:27:29.363 06:19:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:29.363 06:19:59 -- common/autotest_common.sh@10 -- # set +x 00:27:29.363 06:19:59 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:27:29.363 06:19:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:29.363 06:19:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:29.363 06:19:59 -- common/autotest_common.sh@10 -- # set +x 00:27:29.363 ************************************ 00:27:29.363 START TEST bdev_bounds 00:27:29.363 ************************************ 00:27:29.363 06:19:59 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:27:29.363 06:19:59 -- bdev/blockdev.sh@288 -- # bdevio_pid=132485 00:27:29.363 06:19:59 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:27:29.363 06:19:59 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 132485' 00:27:29.363 Process bdevio pid: 132485 00:27:29.363 06:19:59 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:27:29.363 06:19:59 -- bdev/blockdev.sh@291 -- # waitforlisten 132485 00:27:29.363 06:19:59 -- common/autotest_common.sh@819 -- # '[' -z 132485 ']' 00:27:29.363 06:19:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.363 06:19:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:29.363 06:19:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:29.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:29.363 06:19:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:29.363 06:19:59 -- common/autotest_common.sh@10 -- # set +x 00:27:29.363 [2024-06-11 06:19:59.650912] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:27:29.363 [2024-06-11 06:19:59.652130] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132485 ] 00:27:29.363 [2024-06-11 06:19:59.848419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:29.622 [2024-06-11 06:20:00.109336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.622 [2024-06-11 06:20:00.109512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.622 [2024-06-11 06:20:00.109514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:30.999 06:20:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:30.999 06:20:01 -- common/autotest_common.sh@852 -- # return 0 00:27:30.999 06:20:01 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:27:30.999 I/O targets: 00:27:30.999 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:27:30.999 00:27:30.999 00:27:30.999 CUnit - A unit testing framework for C - Version 2.1-3 00:27:30.999 http://cunit.sourceforge.net/ 00:27:30.999 00:27:30.999 00:27:30.999 Suite: bdevio tests on: Nvme0n1 00:27:30.999 Test: blockdev write read block ...passed 00:27:30.999 Test: blockdev write zeroes read block ...passed 00:27:30.999 Test: blockdev write zeroes read no split ...passed 00:27:30.999 Test: blockdev write zeroes read split ...passed 00:27:30.999 Test: blockdev write zeroes read split partial ...passed 00:27:30.999 Test: blockdev reset ...[2024-06-11 06:20:01.390627] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:27:30.999 [2024-06-11 06:20:01.394731] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:30.999 passed
00:27:30.999 Test: blockdev write read 8 blocks ...passed
00:27:30.999 Test: blockdev write read size > 128k ...passed
00:27:30.999 Test: blockdev write read invalid size ...passed
00:27:30.999 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:27:30.999 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:27:30.999 Test: blockdev write read max offset ...passed
00:27:30.999 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:27:30.999 Test: blockdev writev readv 8 blocks ...passed
00:27:30.999 Test: blockdev writev readv 30 x 1block ...passed
00:27:30.999 Test: blockdev writev readv block ...passed
00:27:30.999 Test: blockdev writev readv size > 128k ...passed
00:27:31.000 Test: blockdev writev readv size > 128k in two iovs ...passed
00:27:31.000 Test: blockdev comparev and writev ...[2024-06-11 06:20:01.404743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x3100d000 len:0x1000
00:27:31.000 [2024-06-11 06:20:01.404952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:27:31.000 passed
00:27:31.000 Test: blockdev nvme passthru rw ...passed
00:27:31.000 Test: blockdev nvme passthru vendor specific ...[2024-06-11 06:20:01.406178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:27:31.000 [2024-06-11 06:20:01.406341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:27:31.000 passed
00:27:31.000 Test: blockdev nvme admin passthru ...passed
00:27:31.000 Test: blockdev copy ...passed
00:27:31.000
00:27:31.000 Run Summary: Type Total Ran Passed Failed Inactive
00:27:31.000 suites 1 1 n/a 0 0
00:27:31.000 tests 23 23 23 0 0
00:27:31.000 asserts 152 152 152 0 n/a
00:27:31.000
00:27:31.000 Elapsed time = 0.239 seconds
00:27:31.000 0
00:27:31.000 06:20:01 -- bdev/blockdev.sh@293 -- # killprocess 132485
00:27:31.000 06:20:01 -- common/autotest_common.sh@926 -- # '[' -z 132485 ']'
00:27:31.000 06:20:01 -- common/autotest_common.sh@930 -- # kill -0 132485
00:27:31.000 06:20:01 -- common/autotest_common.sh@931 -- # uname
00:27:31.000 06:20:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:27:31.000 06:20:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 132485
00:27:31.000 06:20:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:27:31.000 06:20:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:27:31.000 06:20:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 132485'
00:27:31.000 killing process with pid 132485
00:27:31.000 06:20:01 -- common/autotest_common.sh@945 -- # kill 132485
00:27:31.000 06:20:01 -- common/autotest_common.sh@950 -- # wait 132485
00:27:32.390 06:20:02 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT
00:27:32.390
00:27:32.390 real 0m3.402s
00:27:32.390 user 0m8.185s
00:27:32.390 sys 0m0.527s
00:27:32.391 06:20:02 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:32.391 06:20:02 -- common/autotest_common.sh@10 -- # set +x
00:27:32.391 ************************************
00:27:32.391 END TEST bdev_bounds
00:27:32.391 ************************************
00:27:32.391 06:20:03 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 ''
00:27:32.391 06:20:03 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']'
00:27:32.391 06:20:03 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:27:32.391 06:20:03 -- common/autotest_common.sh@10 -- # set +x
00:27:32.650 ************************************
00:27:32.650 START TEST bdev_nbd
00:27:32.650 ************************************
00:27:32.650 06:20:03 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 ''
00:27:32.650 06:20:03 -- bdev/blockdev.sh@298 -- # uname -s
00:27:32.650 06:20:03 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]]
00:27:32.650 06:20:03 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:27:32.650 06:20:03 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:27:32.650 06:20:03 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1')
00:27:32.650 06:20:03 -- bdev/blockdev.sh@302 -- # local bdev_all
00:27:32.650 06:20:03 -- bdev/blockdev.sh@303 -- # local bdev_num=1
00:27:32.650 06:20:03 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]]
00:27:32.650 06:20:03 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:27:32.650 06:20:03 -- bdev/blockdev.sh@309 -- # local nbd_all
00:27:32.650 06:20:03 -- bdev/blockdev.sh@310 -- # bdev_num=1
00:27:32.650 06:20:03 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0')
00:27:32.650 06:20:03 -- bdev/blockdev.sh@312 -- # local nbd_list
00:27:32.650 06:20:03 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1')
00:27:32.650 06:20:03 -- bdev/blockdev.sh@313 -- # local bdev_list
00:27:32.650 06:20:03 -- bdev/blockdev.sh@316 -- # nbd_pid=132568
00:27:32.650 06:20:03 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:27:32.650 06:20:03 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:27:32.650 06:20:03 -- bdev/blockdev.sh@318 -- # waitforlisten 132568 /var/tmp/spdk-nbd.sock
00:27:32.650 06:20:03 -- common/autotest_common.sh@819 -- # '[' -z 132568 ']'
00:27:32.650 06:20:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:27:32.650 06:20:03 -- common/autotest_common.sh@824 -- # local max_retries=100
00:27:32.650 06:20:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:27:32.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:27:32.650 06:20:03 -- common/autotest_common.sh@828 -- # xtrace_disable
00:27:32.650 06:20:03 -- common/autotest_common.sh@10 -- # set +x
00:27:32.650 [2024-06-11 06:20:03.141032] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:27:32.650 [2024-06-11 06:20:03.141442] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:32.909 [2024-06-11 06:20:03.325269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:33.167 [2024-06-11 06:20:03.584989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:34.104 06:20:04 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:27:34.104 06:20:04 -- common/autotest_common.sh@852 -- # return 0
00:27:34.104 06:20:04 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1
00:27:34.104 06:20:04 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:27:34.104 06:20:04 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1')
00:27:34.104 06:20:04 -- bdev/nbd_common.sh@114 -- # local bdev_list
00:27:34.104 06:20:04 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1
00:27:34.104 06:20:04 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:27:34.104 06:20:04 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1')
00:27:34.104 06:20:04 -- bdev/nbd_common.sh@23 -- # local bdev_list
00:27:34.104 06:20:04 -- bdev/nbd_common.sh@24 -- # local i
00:27:34.104 06:20:04 -- bdev/nbd_common.sh@25 -- # local nbd_device
00:27:34.104 06:20:04 -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:27:34.104 06:20:04 -- bdev/nbd_common.sh@27 -- # (( i < 1 ))
00:27:34.104 06:20:04 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1
00:27:34.363 06:20:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:27:34.363 06:20:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:27:34.363 06:20:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:27:34.363 06:20:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:27:34.363 06:20:04 -- common/autotest_common.sh@857 -- # local i
00:27:34.363 06:20:04 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:27:34.363 06:20:04 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:27:34.363 06:20:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions
00:27:34.363 06:20:04 -- common/autotest_common.sh@861 -- # break
00:27:34.363 06:20:04 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:27:34.363 06:20:04 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:27:34.363 06:20:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:27:34.363 1+0 records in
00:27:34.363 1+0 records out
00:27:34.363 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00143227 s, 2.9 MB/s
00:27:34.363 06:20:04 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:27:34.363 06:20:04 -- common/autotest_common.sh@874 -- # size=4096
00:27:34.363 06:20:04 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:27:34.363 06:20:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:27:34.363 06:20:05 -- common/autotest_common.sh@877 -- # return 0
00:27:34.363 06:20:05 -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:27:34.363 06:20:05 -- bdev/nbd_common.sh@27 -- # (( i < 1 ))
00:27:34.363 06:20:05 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:27:34.932 06:20:05 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:27:34.932 {
00:27:34.932 "nbd_device": "/dev/nbd0",
00:27:34.932 "bdev_name": "Nvme0n1"
00:27:34.932 }
00:27:34.932 ]'
00:27:34.932 06:20:05 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:27:34.932 06:20:05 -- bdev/nbd_common.sh@119 -- # echo '[
00:27:34.932 {
00:27:34.932 "nbd_device": "/dev/nbd0",
00:27:34.932 "bdev_name": "Nvme0n1"
00:27:34.932 }
00:27:34.932 ]'
00:27:34.932 06:20:05 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:27:34.932 06:20:05 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:27:34.932 06:20:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:27:34.932 06:20:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:27:34.932 06:20:05 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:27:34.932 06:20:05 -- bdev/nbd_common.sh@51 -- # local i
00:27:34.932 06:20:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:27:34.932 06:20:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:27:34.932 06:20:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:27:34.932 06:20:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:27:34.932 06:20:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:27:34.932 06:20:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:27:34.932 06:20:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:27:34.932 06:20:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:27:34.932 06:20:05 -- bdev/nbd_common.sh@41 -- # break
00:27:34.932 06:20:05 -- bdev/nbd_common.sh@45 -- # return 0
00:27:34.932 06:20:05 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:27:34.932 06:20:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:27:34.932 06:20:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:27:35.191 06:20:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:27:35.191 06:20:05 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:27:35.191 06:20:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:27:35.191 06:20:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:27:35.191 06:20:05 -- bdev/nbd_common.sh@65 -- # echo ''
00:27:35.191 06:20:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:27:35.191 06:20:05 -- bdev/nbd_common.sh@65 -- # true
00:27:35.191 06:20:05 -- bdev/nbd_common.sh@65 -- # count=0
00:27:35.191 06:20:05 -- bdev/nbd_common.sh@66 -- # echo 0
00:27:35.192 06:20:05 -- bdev/nbd_common.sh@122 -- # count=0
00:27:35.192 06:20:05 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:27:35.192 06:20:05 -- bdev/nbd_common.sh@127 -- # return 0
00:27:35.192 06:20:05 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0
00:27:35.192 06:20:05 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:27:35.192 06:20:05 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1')
00:27:35.192 06:20:05 -- bdev/nbd_common.sh@91 -- # local bdev_list
00:27:35.192 06:20:05 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0')
00:27:35.192 06:20:05 -- bdev/nbd_common.sh@92 -- # local nbd_list
00:27:35.192 06:20:05 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0
00:27:35.192 06:20:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:27:35.192 06:20:05 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1')
00:27:35.192 06:20:05 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:27:35.192 06:20:05 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:27:35.192 06:20:05 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:27:35.192 06:20:05 -- bdev/nbd_common.sh@12 -- # local i
00:27:35.192 06:20:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:27:35.192 06:20:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:27:35.192 06:20:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
00:27:35.450 /dev/nbd0
00:27:35.450 06:20:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:27:35.450 06:20:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:27:35.450 06:20:06 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:27:35.450 06:20:06 -- common/autotest_common.sh@857 -- # local i
00:27:35.450 06:20:06 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:27:35.450 06:20:06 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:27:35.450 06:20:06 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions
00:27:35.450 06:20:06 -- common/autotest_common.sh@861 -- # break
00:27:35.451 06:20:06 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:27:35.451 06:20:06 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:27:35.451 06:20:06 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:27:35.451 1+0 records in
00:27:35.451 1+0 records out
00:27:35.451 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364861 s, 11.2 MB/s
00:27:35.451 06:20:06 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:27:35.451 06:20:06 -- common/autotest_common.sh@874 -- # size=4096
00:27:35.451 06:20:06 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:27:35.451 06:20:06 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:27:35.451 06:20:06 -- common/autotest_common.sh@877 -- # return 0
00:27:35.451 06:20:06 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:27:35.451 06:20:06 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:27:35.451 06:20:06 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:27:35.451 06:20:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:27:35.451 06:20:06 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:27:35.710 06:20:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:27:35.710 {
00:27:35.710 "nbd_device": "/dev/nbd0",
00:27:35.710 "bdev_name": "Nvme0n1"
00:27:35.710 }
00:27:35.710 ]'
00:27:35.710 06:20:06 -- bdev/nbd_common.sh@64 -- # echo '[
00:27:35.710 {
00:27:35.710 "nbd_device": "/dev/nbd0",
00:27:35.710 "bdev_name": "Nvme0n1"
00:27:35.710 }
00:27:35.710 ]'
00:27:35.710 06:20:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:27:35.710 06:20:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:27:35.710 06:20:06 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:27:35.710 06:20:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:27:35.710 06:20:06 -- bdev/nbd_common.sh@65 -- # count=1
00:27:35.710 06:20:06 -- bdev/nbd_common.sh@66 -- # echo 1
00:27:35.710 06:20:06 -- bdev/nbd_common.sh@95 -- # count=1
00:27:35.710 06:20:06 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']'
00:27:35.710 06:20:06 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write
00:27:35.710 06:20:06 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:27:35.710 06:20:06 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:27:35.710 06:20:06 -- bdev/nbd_common.sh@71 -- # local operation=write
00:27:35.710 06:20:06 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:27:35.710 06:20:06 -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:27:35.710 06:20:06 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:27:35.710 256+0 records in
00:27:35.710 256+0 records out
00:27:35.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110012 s, 95.3 MB/s
00:27:35.710 06:20:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:27:35.710 06:20:06 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:27:35.969 256+0 records in
00:27:35.969 256+0 records out
00:27:35.969 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0701652 s, 14.9 MB/s
00:27:35.969 06:20:06 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify
00:27:35.969 06:20:06 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:27:35.969 06:20:06 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:27:35.969 06:20:06 -- bdev/nbd_common.sh@71 -- # local operation=verify
00:27:35.969 06:20:06 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:27:35.969 06:20:06 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:27:35.969 06:20:06 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:27:35.969 06:20:06 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:27:35.969 06:20:06 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:27:35.969 06:20:06 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:27:35.969 06:20:06 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:27:35.969 06:20:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:27:35.969 06:20:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:27:35.969 06:20:06 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:27:35.969 06:20:06 -- bdev/nbd_common.sh@51 -- # local i
00:27:35.969 06:20:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:27:35.969 06:20:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:27:35.969 06:20:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:27:35.969 06:20:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:27:35.969 06:20:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:27:35.969 06:20:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:27:35.969 06:20:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:27:35.969 06:20:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:27:36.228 06:20:06 -- bdev/nbd_common.sh@41 -- # break
00:27:36.228 06:20:06 -- bdev/nbd_common.sh@45 -- # return 0
00:27:36.228 06:20:06 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:27:36.228 06:20:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:27:36.228 06:20:06 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:27:36.488 06:20:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:27:36.488 06:20:06 -- bdev/nbd_common.sh@64 -- # echo '[]'
06:20:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:27:36.488 06:20:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:27:36.488 06:20:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:27:36.488 06:20:06 -- bdev/nbd_common.sh@65 -- # echo ''
00:27:36.488 06:20:06 -- bdev/nbd_common.sh@65 -- # true
00:27:36.488 06:20:06 -- bdev/nbd_common.sh@65 -- # count=0
00:27:36.488 06:20:06 -- bdev/nbd_common.sh@66 -- # echo 0
00:27:36.488 06:20:06 -- bdev/nbd_common.sh@104 -- # count=0
00:27:36.488 06:20:06 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:27:36.488 06:20:06 -- bdev/nbd_common.sh@109 -- # return 0
00:27:36.488 06:20:06 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:27:36.488 06:20:06 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:27:36.488 06:20:06 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0')
00:27:36.488 06:20:06 -- bdev/nbd_common.sh@132 -- # local nbd_list
00:27:36.488 06:20:06 -- bdev/nbd_common.sh@133 -- # local mkfs_ret
00:27:36.488 06:20:06 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:27:36.747 malloc_lvol_verify
00:27:36.747 06:20:07 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:27:36.747 030c26f9-54f2-40dd-8a8a-5f9a26f6f60e
00:27:37.005 06:20:07 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:27:37.005 64f18954-586a-480b-9267-bdbd07d6fc0f
00:27:37.005 06:20:07 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:27:37.264 /dev/nbd0
00:27:37.264 06:20:07 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0
00:27:37.264 mke2fs 1.46.5 (30-Dec-2021)
00:27:37.264
00:27:37.264 Filesystem too small for a journal
00:27:37.264 Discarding device blocks: 0/1024 done
00:27:37.264 Creating filesystem with 1024 4k blocks and 1024 inodes
00:27:37.264
00:27:37.264 Allocating group tables: 0/1 done
00:27:37.264 Writing inode tables: 0/1 done
00:27:37.264 Writing superblocks and filesystem accounting information: 0/1 done
00:27:37.264
00:27:37.264 06:20:07 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0
00:27:37.264 06:20:07 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:27:37.264 06:20:07 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:27:37.264 06:20:07 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:27:37.264 06:20:07 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:27:37.264 06:20:07 -- bdev/nbd_common.sh@51 -- # local i
00:27:37.264 06:20:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:27:37.264 06:20:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:27:37.523 06:20:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:27:37.523 06:20:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:27:37.523 06:20:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:27:37.523 06:20:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:27:37.523 06:20:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:27:37.524 06:20:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:27:37.524 06:20:08 -- bdev/nbd_common.sh@41 -- # break
00:27:37.524 06:20:08 -- bdev/nbd_common.sh@45 -- # return 0
00:27:37.524 06:20:08 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']'
00:27:37.524 06:20:08 -- bdev/nbd_common.sh@147 -- # return 0
00:27:37.524 06:20:08 -- bdev/blockdev.sh@324 -- # killprocess 132568
00:27:37.524 06:20:08 -- common/autotest_common.sh@926 -- # '[' -z 132568 ']'
00:27:37.524 06:20:08 -- common/autotest_common.sh@930 -- # kill -0 132568
00:27:37.524 06:20:08 -- common/autotest_common.sh@931 -- # uname
00:27:37.524 06:20:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:27:37.524 06:20:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 132568
00:27:37.524 06:20:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:27:37.524 06:20:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:27:37.524 06:20:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 132568'
00:27:37.524 killing process with pid 132568
00:27:37.524 06:20:08 -- common/autotest_common.sh@945 -- # kill 132568
00:27:37.524 06:20:08 -- common/autotest_common.sh@950 -- # wait 132568
00:27:38.899 06:20:09 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT
00:27:38.899
00:27:38.899 real 0m6.493s
00:27:38.899 user 0m8.584s
00:27:38.899 sys 0m1.555s
00:27:38.899 06:20:09 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:38.899 06:20:09 -- common/autotest_common.sh@10 -- # set +x
00:27:38.899 ************************************
00:27:38.899 END TEST bdev_nbd
00:27:38.899 ************************************
00:27:39.157 06:20:09 -- bdev/blockdev.sh@761 -- # [[ y == y ]]
00:27:39.157 06:20:09 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']'
00:27:39.157 skipping fio tests on NVMe due to multi-ns failures.
00:27:39.158 06:20:09 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:27:39.158 06:20:09 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT
00:27:39.158 06:20:09 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:27:39.158 06:20:09 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']'
00:27:39.158 06:20:09 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:27:39.158 06:20:09 -- common/autotest_common.sh@10 -- # set +x
00:27:39.158 ************************************
00:27:39.158 START TEST bdev_verify
00:27:39.158 ************************************
00:27:39.158 06:20:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:27:39.158 [2024-06-11 06:20:09.695466] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:27:39.158 [2024-06-11 06:20:09.695705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132771 ]
00:27:39.415 [2024-06-11 06:20:09.883924] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:27:39.673 [2024-06-11 06:20:10.144887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:39.673 [2024-06-11 06:20:10.144888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:27:40.240 Running I/O for 5 seconds...
00:27:45.511
00:27:45.511 Latency(us)
00:27:45.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:45.511 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:45.511 Verification LBA range: start 0x0 length 0xa0000
00:27:45.511 Nvme0n1 : 5.01 18553.58 72.47 0.00 0.00 6870.65 321.83 18225.25
00:27:45.511 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:27:45.511 Verification LBA range: start 0xa0000 length 0xa0000
00:27:45.511 Nvme0n1 : 5.01 12096.59 47.25 0.00 0.00 10541.55 317.93 16852.11
00:27:45.511 ===================================================================================================================
00:27:45.511 Total : 30650.18 119.73 0.00 0.00 8319.76 317.93 18225.25
00:27:53.660
00:27:53.660 real 0m14.234s
00:27:53.660 user 0m26.873s
00:27:53.660 sys 0m0.526s
00:27:53.660 06:20:23 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:53.660 ************************************
00:27:53.660 END TEST bdev_verify
00:27:53.660 ************************************
00:27:53.660 06:20:23 -- common/autotest_common.sh@10 -- # set +x
00:27:53.660 06:20:23 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:27:53.660 06:20:23 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']'
00:27:53.660 06:20:23 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:27:53.660 06:20:23 -- common/autotest_common.sh@10 -- # set +x
00:27:53.660 ************************************
00:27:53.660 START TEST bdev_verify_big_io
00:27:53.660 ************************************
00:27:53.660 06:20:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:27:53.660 [2024-06-11 06:20:24.000711] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:27:53.660 [2024-06-11 06:20:24.000931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132892 ]
00:27:53.920 [2024-06-11 06:20:24.187961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:27:53.920 [2024-06-11 06:20:24.364347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:27:53.920 [2024-06-11 06:20:24.364409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:54.178 Running I/O for 5 seconds...
00:27:59.452
00:27:59.452 Latency(us)
00:27:59.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:59.452 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:27:59.452 Verification LBA range: start 0x0 length 0xa000
00:27:59.452 Nvme0n1 : 5.07 1188.70 74.29 0.00 0.00 105642.15 624.15 165774.87
00:27:59.452 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:27:59.452 Verification LBA range: start 0xa000 length 0xa000
00:27:59.452 Nvme0n1 : 5.07 1171.68 73.23 0.00 0.00 107134.05 1068.86 188743.68
00:27:59.452 ===================================================================================================================
00:27:59.452 Total : 2360.39 147.52 0.00 0.00 106382.55 624.15 188743.68
00:28:00.830
00:28:00.830 real 0m7.213s
00:28:00.830 user 0m13.195s
00:28:00.830 sys 0m0.270s
00:28:00.830 06:20:31 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:00.830 06:20:31 -- common/autotest_common.sh@10 -- # set +x
00:28:00.830 ************************************
00:28:00.830 END TEST bdev_verify_big_io
00:28:00.830 ************************************
00:28:00.830 06:20:31 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:00.830 06:20:31 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:28:00.830 06:20:31 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:28:00.830 06:20:31 -- common/autotest_common.sh@10 -- # set +x
00:28:00.830 ************************************
00:28:00.830 START TEST bdev_write_zeroes
00:28:00.830 ************************************
00:28:00.830 06:20:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:00.830 [2024-06-11 06:20:31.278936] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:28:00.830 [2024-06-11 06:20:31.279124] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132994 ]
00:28:00.830 [2024-06-11 06:20:31.461120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:01.090 [2024-06-11 06:20:31.640197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:01.667 Running I/O for 1 seconds...
00:28:02.601
00:28:02.601 Latency(us)
00:28:02.601 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:02.601 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:28:02.601 Nvme0n1 : 1.00 72414.29 282.87 0.00 0.00 1763.38 546.13 15603.81
00:28:02.601 ===================================================================================================================
00:28:02.601 Total : 72414.29 282.87 0.00 0.00 1763.38 546.13 15603.81
00:28:03.976
00:28:03.976 real 0m3.035s
00:28:03.976 user 0m2.687s
00:28:03.976 sys 0m0.249s
00:28:03.976 ************************************
00:28:03.976 END TEST bdev_write_zeroes
00:28:03.976 06:20:34 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:03.976 06:20:34 -- common/autotest_common.sh@10 -- # set +x
00:28:03.976 ************************************
00:28:03.976 06:20:34 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:03.976 06:20:34 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:28:03.976 06:20:34 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:28:03.976 06:20:34 -- common/autotest_common.sh@10 -- # set +x
00:28:03.976 ************************************
00:28:03.976 START TEST bdev_json_nonenclosed
00:28:03.976 ************************************
00:28:03.976 06:20:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:03.977 [2024-06-11 06:20:34.365025] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:28:03.977 [2024-06-11 06:20:34.365624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133056 ]
00:28:03.977 [2024-06-11 06:20:34.528958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:04.236 [2024-06-11 06:20:34.697155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:04.236 [2024-06-11 06:20:34.697570] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:28:04.236 [2024-06-11 06:20:34.697704] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:04.496
00:28:04.496 real 0m0.803s
00:28:04.496 user 0m0.566s
00:28:04.496 sys 0m0.136s
00:28:04.496 06:20:35 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:04.496 06:20:35 -- common/autotest_common.sh@10 -- # set +x
00:28:04.496 ************************************
00:28:04.496 END TEST bdev_json_nonenclosed
00:28:04.496 ************************************
00:28:04.755 06:20:35 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:04.755 06:20:35 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:28:04.755 06:20:35 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:28:04.755 06:20:35 -- common/autotest_common.sh@10 -- # set +x
00:28:04.755 ************************************
00:28:04.755 START TEST bdev_json_nonarray
00:28:04.755 ************************************
00:28:04.755 06:20:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:28:04.755 [2024-06-11 06:20:35.232426] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:28:04.755 [2024-06-11 06:20:35.232582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133097 ]
00:28:04.755 [2024-06-11 06:20:35.393514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:05.014 [2024-06-11 06:20:35.567774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:05.014 [2024-06-11 06:20:35.568173] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:28:05.014 [2024-06-11 06:20:35.568308] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:05.583
00:28:05.583 real 0m0.799s
00:28:05.583 user 0m0.548s
00:28:05.583 sys 0m0.151s
00:28:05.583 06:20:35 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:05.583 ************************************
00:28:05.583 END TEST bdev_json_nonarray
00:28:05.583 ************************************
00:28:05.583 06:20:35 -- common/autotest_common.sh@10 -- # set +x
00:28:05.583 06:20:36 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]]
00:28:05.583 06:20:36 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]]
00:28:05.583 06:20:36 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]]
00:28:05.583 06:20:36 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT
00:28:05.583 06:20:36 -- bdev/blockdev.sh@809 -- # cleanup
00:28:05.583 06:20:36 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:28:05.583 06:20:36 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:28:05.583 06:20:36 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]]
00:28:05.583 06:20:36 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]]
00:28:05.583 06:20:36 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]]
00:28:05.583 06:20:36 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]]
00:28:05.583
00:28:05.583 real 0m44.001s
00:28:05.583 user 1m7.915s
00:28:05.583 sys 0m4.987s
00:28:05.583 06:20:36 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:05.583 ************************************
00:28:05.583 06:20:36 -- common/autotest_common.sh@10 -- # set +x
00:28:05.583 END TEST blockdev_nvme
00:28:05.583 ************************************
00:28:05.583 06:20:36 -- spdk/autotest.sh@219 -- # uname -s
00:28:05.583 06:20:36 -- spdk/autotest.sh@219 -- # [[ Linux == Linux ]]
00:28:05.583 06:20:36 -- spdk/autotest.sh@220 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:28:05.583 06:20:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:28:05.583 06:20:36 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:28:05.583 06:20:36 -- common/autotest_common.sh@10 -- # set +x
00:28:05.583 ************************************
00:28:05.583 START TEST blockdev_nvme_gpt
00:28:05.583 ************************************
00:28:05.583 06:20:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:28:05.583 * Looking for test storage...
00:28:05.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:28:05.583 06:20:36 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:28:05.583 06:20:36 -- bdev/nbd_common.sh@6 -- # set -e
00:28:05.583 06:20:36 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:28:05.583 06:20:36 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:28:05.583 06:20:36 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:28:05.583 06:20:36 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:28:05.583 06:20:36 -- bdev/blockdev.sh@18 -- # :
00:28:05.583 06:20:36 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0
00:28:05.583 06:20:36 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1
00:28:05.583 06:20:36 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5
00:28:05.583 06:20:36 -- bdev/blockdev.sh@672 -- # uname -s
00:28:05.583 06:20:36 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']'
00:28:05.583 06:20:36 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0
00:28:05.583 06:20:36 -- bdev/blockdev.sh@680 -- # test_type=gpt
00:28:05.583 06:20:36 -- bdev/blockdev.sh@681 -- # crypto_device=
00:28:05.583 06:20:36 -- bdev/blockdev.sh@682 -- # dek=
00:28:05.583 06:20:36 -- bdev/blockdev.sh@683 -- # env_ctx=
00:28:05.583 06:20:36 -- bdev/blockdev.sh@684 -- # wait_for_rpc=
00:28:05.583 06:20:36 -- bdev/blockdev.sh@685 -- # '[' -n '' ']'
00:28:05.583 06:20:36 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]]
00:28:05.583 06:20:36 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]]
00:28:05.583 06:20:36 -- bdev/blockdev.sh@691 -- # start_spdk_tgt
00:28:05.583 06:20:36 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=133174
00:28:05.583 06:20:36 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:28:05.583 06:20:36 -- bdev/blockdev.sh@47 -- # waitforlisten 133174
00:28:05.583 06:20:36 -- common/autotest_common.sh@819 -- # '[' -z 133174 ']'
00:28:05.583 06:20:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:05.583 06:20:36 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:28:05.583 06:20:36 -- common/autotest_common.sh@824 -- # local max_retries=100
00:28:05.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:05.583 06:20:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:05.583 06:20:36 -- common/autotest_common.sh@828 -- # xtrace_disable
00:28:05.583 06:20:36 -- common/autotest_common.sh@10 -- # set +x
00:28:05.842 [2024-06-11 06:20:36.320949] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:28:05.842 [2024-06-11 06:20:36.321161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133174 ]
00:28:06.101 [2024-06-11 06:20:36.503136] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:06.101 [2024-06-11 06:20:36.666298] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:28:06.101 [2024-06-11 06:20:36.666651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:07.492 06:20:37 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:28:07.492 06:20:37 -- common/autotest_common.sh@852 -- # return 0
00:28:07.492 06:20:37 -- bdev/blockdev.sh@692 -- # case "$test_type" in
00:28:07.492 06:20:37 -- bdev/blockdev.sh@700 -- # setup_gpt_conf
00:28:07.492 06:20:37 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:28:07.751 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:28:07.751 Waiting for block devices as requested
00:28:07.751 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:28:08.010 06:20:38 -- bdev/blockdev.sh@103 -- # get_zoned_devs
00:28:08.010 06:20:38 -- common/autotest_common.sh@1654 -- # zoned_devs=()
00:28:08.010 06:20:38 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs
00:28:08.010 06:20:38 -- common/autotest_common.sh@1655 -- # local nvme bdf
00:28:08.010 06:20:38 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme*
00:28:08.010 06:20:38 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1
00:28:08.010 06:20:38 -- common/autotest_common.sh@1647 -- # local device=nvme0n1
00:28:08.010 06:20:38 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:28:08.010 06:20:38 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
00:28:08.010 06:20:38 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme0/nvme0n1')
00:28:08.010 06:20:38 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev
00:28:08.010 06:20:38 -- bdev/blockdev.sh@106 -- # gpt_nvme=
00:28:08.010 06:20:38 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}"
00:28:08.010 06:20:38 -- bdev/blockdev.sh@109 -- # [[ -z '' ]]
00:28:08.010 06:20:38 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme0n1
00:28:08.010 06:20:38 -- bdev/blockdev.sh@111 -- # parted /dev/nvme0n1 -ms print
00:28:08.010 06:20:38 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme0n1: unrecognised disk label
00:28:08.010 BYT;
00:28:08.010 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;'
00:28:08.010 06:20:38 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme0n1: unrecognised disk label
00:28:08.010 BYT;
00:28:08.010 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]]
00:28:08.010 06:20:38 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme0n1
00:28:08.010 06:20:38 -- bdev/blockdev.sh@114 -- # break
00:28:08.010 06:20:38 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme0n1 ]]
00:28:08.010 06:20:38 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030
00:28:08.010 06:20:38 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df
00:28:08.010 06:20:38 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
00:28:08.270 06:20:38 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old
00:28:08.270 06:20:38 -- scripts/common.sh@410 -- # local spdk_guid
00:28:08.270 06:20:38 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]]
00:28:08.270 06:20:38 -- scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:28:08.270 06:20:38 -- scripts/common.sh@415 -- # IFS='()'
00:28:08.270 06:20:38 -- scripts/common.sh@415 -- # read -r _ spdk_guid _
00:28:08.270 06:20:38 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:28:08.270 06:20:38 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c
00:28:08.270 06:20:38 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:28:08.270 06:20:38 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:28:08.270 06:20:38 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:28:08.270 06:20:38 -- bdev/blockdev.sh@129 -- # get_spdk_gpt
00:28:08.270 06:20:38 -- scripts/common.sh@422 -- # local spdk_guid
00:28:08.270 06:20:38 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]]
00:28:08.270 06:20:38 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:28:08.270 06:20:38 -- scripts/common.sh@427 -- # IFS='()'
00:28:08.270 06:20:38 -- scripts/common.sh@427 -- # read -r _ spdk_guid _
00:28:08.270 06:20:38 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:28:08.270 06:20:38 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b
00:28:08.270 06:20:38 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b
00:28:08.270 06:20:38 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b
00:28:08.270 06:20:38 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b
00:28:08.270 06:20:38 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
00:28:09.648 The operation has completed successfully.
00:28:09.648 06:20:39 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1
00:28:10.584 The operation has completed successfully.
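The parted and sgdisk steps above are what make these partitions discoverable to SPDK's gpt bdev module: partition 1 is tagged with the current SPDK GPT partition type GUID (6527994e-2c5a-4eec-9613-8f5944074e8b) and partition 2 with the legacy one (7c5222bd-8f5d-4087-9c00-bf9843c7b58c), which is why the module later exposes them as Nvme0n1p1 and Nvme0n1p2. A minimal standalone sketch of the same setup, using the commands from the trace; the scratch device path and the assumption that it may be wiped are illustrative, not taken from this run:

#!/usr/bin/env bash
# Sketch only: label a throwaway disk and tag both halves with SPDK's GPT type GUIDs.
set -e
disk=/dev/nvme0n1   # assumption: a scratch NVMe disk whose contents can be destroyed
# Create a GPT label with two equal partitions, as blockdev.sh@126 does above.
parted -s "$disk" mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
# Set the partition type GUIDs (-t) extracted from module/bdev/gpt/gpt.h in the trace;
# the unique partition GUIDs (-u) are arbitrary and only need to be valid UUIDs.
sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$disk"
sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$disk"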
00:28:10.584 06:20:40 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:28:10.859 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:28:11.154 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:28:12.554 06:20:42 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs
00:28:12.554 06:20:42 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:12.554 06:20:42 -- common/autotest_common.sh@10 -- # set +x
00:28:12.554 []
00:28:12.554 06:20:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:12.554 06:20:42 -- bdev/blockdev.sh@134 -- # setup_nvme_conf
00:28:12.554 06:20:42 -- bdev/blockdev.sh@79 -- # local json
00:28:12.554 06:20:42 -- bdev/blockdev.sh@80 -- # mapfile -t json
00:28:12.554 06:20:42 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:28:12.554 06:20:43 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\'''
00:28:12.554 06:20:43 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:12.554 06:20:43 -- common/autotest_common.sh@10 -- # set +x
00:28:12.554 06:20:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:12.554 06:20:43 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine
00:28:12.554 06:20:43 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:12.554 06:20:43 -- common/autotest_common.sh@10 -- # set +x
00:28:12.554 06:20:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:12.554 06:20:43 -- bdev/blockdev.sh@738 -- # cat
00:28:12.554 06:20:43 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel
00:28:12.554 06:20:43 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:12.554 06:20:43 -- common/autotest_common.sh@10 -- # set +x
00:28:12.554 06:20:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:12.554 06:20:43 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev
00:28:12.554 06:20:43 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:12.554 06:20:43 -- common/autotest_common.sh@10 -- # set +x
00:28:12.554 06:20:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:12.554 06:20:43 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf
00:28:12.554 06:20:43 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:12.554 06:20:43 -- common/autotest_common.sh@10 -- # set +x
00:28:12.554 06:20:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:12.554 06:20:43 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs
00:28:12.554 06:20:43 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs
00:28:12.554 06:20:43 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:12.554 06:20:43 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)'
00:28:12.554 06:20:43 -- common/autotest_common.sh@10 -- # set +x
00:28:12.814 06:20:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:12.814 06:20:43 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name
00:28:12.814 06:20:43 -- bdev/blockdev.sh@747 -- # jq -r .name
00:28:12.814 06:20:43 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}'
00:28:12.814 06:20:43 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}")
00:28:12.814 06:20:43 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1
00:28:12.814 06:20:43 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT
00:28:12.814 06:20:43 -- bdev/blockdev.sh@752 -- # killprocess 133174
00:28:12.814 06:20:43 -- common/autotest_common.sh@926 -- # '[' -z 133174 ']'
00:28:12.814 06:20:43 -- common/autotest_common.sh@930 -- # kill -0 133174
00:28:12.814 06:20:43 -- common/autotest_common.sh@931 -- # uname
00:28:12.814 06:20:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:28:12.814 06:20:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133174
00:28:12.814 06:20:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:28:12.814 killing process with pid 133174
00:28:12.814 06:20:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:28:12.814 06:20:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 133174'
00:28:12.814 06:20:43 -- common/autotest_common.sh@945 -- # kill 133174
00:28:12.814 06:20:43 -- common/autotest_common.sh@950 -- # wait 133174
00:28:15.351 06:20:45 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT
00:28:15.351 06:20:45 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 ''
00:28:15.351 06:20:45 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']'
00:28:15.351 06:20:45 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:28:15.351 06:20:45 -- common/autotest_common.sh@10 -- # set +x
00:28:15.351 ************************************
00:28:15.351 START TEST bdev_hello_world
00:28:15.351 ************************************
00:28:15.351 06:20:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 ''
00:28:15.351 [2024-06-11 06:20:45.587596] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization...
00:28:15.351 [2024-06-11 06:20:45.587831] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133625 ]
00:28:15.351 [2024-06-11 06:20:45.774228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:15.351 [2024-06-11 06:20:45.973911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:15.920 [2024-06-11 06:20:46.441025] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:28:15.920 [2024-06-11 06:20:46.441092] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1
00:28:15.920 [2024-06-11 06:20:46.441123] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:28:15.920 [2024-06-11 06:20:46.443941] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:28:15.920 [2024-06-11 06:20:46.444542] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:28:15.920 [2024-06-11 06:20:46.444583] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io
00:28:15.920 [2024-06-11 06:20:46.444852] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:28:15.920
00:28:15.920 [2024-06-11 06:20:46.444879] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app
00:28:17.299
00:28:17.299 real 0m2.327s
00:28:17.299 user 0m1.995s
00:28:17.299 sys 0m0.233s
00:28:17.299 06:20:47 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:17.299 ************************************
00:28:17.299 END TEST bdev_hello_world
00:28:17.299 ************************************
00:28:17.299 06:20:47 -- common/autotest_common.sh@10 -- # set +x
00:28:17.299 06:20:47 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds ''
00:28:17.299 06:20:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:28:17.299 06:20:47 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:28:17.299 06:20:47 -- common/autotest_common.sh@10 -- # set +x
00:28:17.299 ************************************
00:28:17.299 START TEST bdev_bounds
00:28:17.299 ************************************
00:28:17.299 06:20:47 -- common/autotest_common.sh@1104 -- # bdev_bounds ''
00:28:17.299 06:20:47 -- bdev/blockdev.sh@288 -- # bdevio_pid=133676
00:28:17.299 Process bdevio pid: 133676
00:28:17.299 06:20:47 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:28:17.299 06:20:47 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 133676'
00:28:17.299 06:20:47 -- bdev/blockdev.sh@291 -- # waitforlisten 133676
00:28:17.299 06:20:47 -- common/autotest_common.sh@819 -- # '[' -z 133676 ']'
00:28:17.299 06:20:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:17.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:17.299 06:20:47 -- common/autotest_common.sh@824 -- # local max_retries=100
00:28:17.299 06:20:47 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:28:17.299 06:20:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:17.299 06:20:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:17.300 06:20:47 -- common/autotest_common.sh@10 -- # set +x 00:28:17.560 [2024-06-11 06:20:47.988012] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:17.560 [2024-06-11 06:20:47.988230] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133676 ] 00:28:17.560 [2024-06-11 06:20:48.179949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:17.819 [2024-06-11 06:20:48.362780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.819 [2024-06-11 06:20:48.362957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:17.819 [2024-06-11 06:20:48.362984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.199 06:20:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:19.199 06:20:49 -- common/autotest_common.sh@852 -- # return 0 00:28:19.199 06:20:49 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:28:19.199 I/O targets: 00:28:19.199 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:28:19.199 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:28:19.199 00:28:19.199 00:28:19.199 CUnit - A unit testing framework for C - Version 2.1-3 00:28:19.199 http://cunit.sourceforge.net/ 00:28:19.199 00:28:19.199 00:28:19.199 Suite: bdevio tests on: Nvme0n1p2 00:28:19.199 Test: blockdev write read block ...passed 00:28:19.199 Test: blockdev write zeroes read block ...passed 00:28:19.199 Test: blockdev write zeroes read no split ...passed 00:28:19.199 Test: blockdev write zeroes read split ...passed 00:28:19.199 Test: blockdev write zeroes read split partial ...passed 00:28:19.199 Test: blockdev reset ...[2024-06-11 06:20:49.708217] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:28:19.199 [2024-06-11 06:20:49.712051] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:19.199 passed 00:28:19.199 Test: blockdev write read 8 blocks ...passed 00:28:19.199 Test: blockdev write read size > 128k ...passed 00:28:19.199 Test: blockdev write read invalid size ...passed 00:28:19.199 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:19.199 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:19.199 Test: blockdev write read max offset ...passed 00:28:19.199 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:19.199 Test: blockdev writev readv 8 blocks ...passed 00:28:19.199 Test: blockdev writev readv 30 x 1block ...passed 00:28:19.199 Test: blockdev writev readv block ...passed 00:28:19.199 Test: blockdev writev readv size > 128k ...passed 00:28:19.199 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:19.199 Test: blockdev comparev and writev ...[2024-06-11 06:20:49.722611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2760b000 len:0x1000 00:28:19.199 [2024-06-11 06:20:49.722712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:28:19.199 passed 00:28:19.199 Test: blockdev nvme passthru rw ...passed 00:28:19.199 Test: blockdev nvme passthru vendor specific ...passed 00:28:19.199 Test: blockdev nvme admin passthru ...passed 00:28:19.199 Test: blockdev copy ...passed 00:28:19.199 Suite: bdevio tests on: Nvme0n1p1 00:28:19.199 Test: blockdev write read block ...passed 00:28:19.199 Test: blockdev write zeroes read block ...passed 00:28:19.199 Test: blockdev write zeroes read no split ...passed 00:28:19.199 Test: blockdev write zeroes read split ...passed 00:28:19.199 Test: blockdev write zeroes read split partial ...passed 00:28:19.199 Test: blockdev reset ...[2024-06-11 06:20:49.793487] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:28:19.199 [2024-06-11 06:20:49.797397] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:28:19.199 passed 00:28:19.199 Test: blockdev write read 8 blocks ...passed 00:28:19.199 Test: blockdev write read size > 128k ...passed 00:28:19.199 Test: blockdev write read invalid size ...passed 00:28:19.199 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:28:19.199 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:28:19.199 Test: blockdev write read max offset ...passed 00:28:19.199 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:28:19.199 Test: blockdev writev readv 8 blocks ...passed 00:28:19.199 Test: blockdev writev readv 30 x 1block ...passed 00:28:19.199 Test: blockdev writev readv block ...passed 00:28:19.199 Test: blockdev writev readv size > 128k ...passed 00:28:19.199 Test: blockdev writev readv size > 128k in two iovs ...passed 00:28:19.199 Test: blockdev comparev and writev ...[2024-06-11 06:20:49.806343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2760d000 len:0x1000 00:28:19.199 [2024-06-11 06:20:49.806420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:28:19.199 passed 00:28:19.199 Test: blockdev nvme passthru rw ...passed 00:28:19.199 Test: blockdev nvme passthru vendor specific ...passed 00:28:19.199 Test: blockdev nvme admin passthru ...passed 00:28:19.199 Test: blockdev copy ...passed 00:28:19.199 00:28:19.199 Run Summary: Type Total Ran Passed Failed Inactive 00:28:19.199 suites 2 2 n/a 0 0 00:28:19.199 tests 46 46 46 0 0 00:28:19.199 asserts 284 284 284 0 n/a 00:28:19.199 00:28:19.199 Elapsed time = 0.477 seconds 00:28:19.199 0 00:28:19.199 06:20:49 -- bdev/blockdev.sh@293 -- # killprocess 133676 00:28:19.199 06:20:49 -- common/autotest_common.sh@926 -- # '[' -z 133676 ']' 00:28:19.199 06:20:49 -- common/autotest_common.sh@930 -- # kill -0 133676 00:28:19.199 06:20:49 -- common/autotest_common.sh@931 -- # uname 00:28:19.199 06:20:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:19.199 06:20:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133676 00:28:19.459 06:20:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:19.459 killing process with pid 133676 00:28:19.459 06:20:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:19.459 06:20:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 133676' 00:28:19.459 06:20:49 -- common/autotest_common.sh@945 -- # kill 133676 00:28:19.459 06:20:49 -- common/autotest_common.sh@950 -- # wait 133676 00:28:20.837 06:20:51 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:28:20.837 00:28:20.837 real 0m3.239s 00:28:20.837 user 0m8.075s 00:28:20.837 sys 0m0.467s 00:28:20.837 ************************************ 00:28:20.837 END TEST bdev_bounds 00:28:20.837 06:20:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:20.837 06:20:51 -- common/autotest_common.sh@10 -- # set +x 00:28:20.837 ************************************ 00:28:20.837 06:20:51 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:28:20.837 06:20:51 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:28:20.837 06:20:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:20.837 06:20:51 -- common/autotest_common.sh@10 -- # set +x 00:28:20.837 ************************************ 00:28:20.837 START TEST bdev_nbd 
00:28:20.837 ************************************ 00:28:20.837 06:20:51 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:28:20.837 06:20:51 -- bdev/blockdev.sh@298 -- # uname -s 00:28:20.837 06:20:51 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:28:20.837 06:20:51 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:20.837 06:20:51 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:20.837 06:20:51 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:28:20.837 06:20:51 -- bdev/blockdev.sh@302 -- # local bdev_all 00:28:20.837 06:20:51 -- bdev/blockdev.sh@303 -- # local bdev_num=2 00:28:20.837 06:20:51 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:28:20.837 06:20:51 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:28:20.837 06:20:51 -- bdev/blockdev.sh@309 -- # local nbd_all 00:28:20.837 06:20:51 -- bdev/blockdev.sh@310 -- # bdev_num=2 00:28:20.837 06:20:51 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:20.837 06:20:51 -- bdev/blockdev.sh@312 -- # local nbd_list 00:28:20.837 06:20:51 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:28:20.837 06:20:51 -- bdev/blockdev.sh@313 -- # local bdev_list 00:28:20.837 06:20:51 -- bdev/blockdev.sh@316 -- # nbd_pid=133746 00:28:20.837 06:20:51 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:28:20.837 06:20:51 -- bdev/blockdev.sh@318 -- # waitforlisten 133746 /var/tmp/spdk-nbd.sock 00:28:20.837 06:20:51 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:20.837 06:20:51 -- common/autotest_common.sh@819 -- # '[' -z 133746 ']' 00:28:20.837 06:20:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:28:20.837 06:20:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:20.837 06:20:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:28:20.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:28:20.837 06:20:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:20.837 06:20:51 -- common/autotest_common.sh@10 -- # set +x 00:28:20.837 [2024-06-11 06:20:51.279736] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:28:20.837 [2024-06-11 06:20:51.280074] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:20.837 [2024-06-11 06:20:51.437081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.097 [2024-06-11 06:20:51.618059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.666 06:20:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:21.666 06:20:52 -- common/autotest_common.sh@852 -- # return 0 00:28:21.666 06:20:52 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:28:21.666 06:20:52 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:21.666 06:20:52 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:28:21.666 06:20:52 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:28:21.666 06:20:52 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:28:21.666 06:20:52 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:21.666 06:20:52 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:28:21.666 06:20:52 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:28:21.666 06:20:52 -- bdev/nbd_common.sh@24 -- # local i 00:28:21.666 06:20:52 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:28:21.666 06:20:52 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:28:21.666 06:20:52 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:28:21.666 06:20:52 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:28:21.925 06:20:52 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:28:21.925 06:20:52 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:28:21.925 06:20:52 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:28:21.925 06:20:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:28:21.925 06:20:52 -- common/autotest_common.sh@857 -- # local i 00:28:21.925 06:20:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:28:21.925 06:20:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:28:21.925 06:20:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:28:21.925 06:20:52 -- common/autotest_common.sh@861 -- # break 00:28:21.925 06:20:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:28:21.925 06:20:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:28:21.925 06:20:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:21.925 1+0 records in 00:28:21.925 1+0 records out 00:28:21.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00130528 s, 3.1 MB/s 00:28:21.925 06:20:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:21.925 06:20:52 -- common/autotest_common.sh@874 -- # size=4096 00:28:21.925 06:20:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:21.925 06:20:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:28:21.925 06:20:52 -- common/autotest_common.sh@877 -- # return 0 00:28:21.925 06:20:52 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:21.925 06:20:52 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:28:21.925 06:20:52 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:28:22.185 06:20:52 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:28:22.185 06:20:52 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:28:22.185 06:20:52 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:28:22.185 06:20:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:28:22.185 06:20:52 -- common/autotest_common.sh@857 -- # local i 00:28:22.185 06:20:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:28:22.185 06:20:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:28:22.185 06:20:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:28:22.185 06:20:52 -- common/autotest_common.sh@861 -- # break 00:28:22.185 06:20:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:28:22.185 06:20:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:28:22.185 06:20:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:22.185 1+0 records in 00:28:22.185 1+0 records out 00:28:22.185 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107071 s, 3.8 MB/s 00:28:22.185 06:20:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:22.185 06:20:52 -- common/autotest_common.sh@874 -- # size=4096 00:28:22.185 06:20:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:22.185 06:20:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:28:22.185 06:20:52 -- common/autotest_common.sh@877 -- # return 0 00:28:22.185 06:20:52 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:28:22.185 06:20:52 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:28:22.185 06:20:52 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:22.444 06:20:52 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:28:22.444 { 00:28:22.444 "nbd_device": "/dev/nbd0", 00:28:22.444 "bdev_name": "Nvme0n1p1" 00:28:22.444 }, 00:28:22.444 { 00:28:22.444 "nbd_device": "/dev/nbd1", 00:28:22.444 "bdev_name": "Nvme0n1p2" 00:28:22.444 } 00:28:22.444 ]' 00:28:22.444 06:20:52 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:28:22.444 06:20:52 -- bdev/nbd_common.sh@119 -- # echo '[ 00:28:22.444 { 00:28:22.444 "nbd_device": "/dev/nbd0", 00:28:22.444 "bdev_name": "Nvme0n1p1" 00:28:22.444 }, 00:28:22.444 { 00:28:22.444 "nbd_device": "/dev/nbd1", 00:28:22.444 "bdev_name": "Nvme0n1p2" 00:28:22.444 } 00:28:22.444 ]' 00:28:22.444 06:20:52 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:28:22.444 06:20:53 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:28:22.444 06:20:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:22.444 06:20:53 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:22.444 06:20:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:22.444 06:20:53 -- bdev/nbd_common.sh@51 -- # local i 00:28:22.444 06:20:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:22.444 06:20:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:22.704 06:20:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:22.704 06:20:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:22.704 06:20:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:22.704 06:20:53 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:22.704 06:20:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:22.704 06:20:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:22.704 06:20:53 -- bdev/nbd_common.sh@41 -- # break 00:28:22.704 06:20:53 -- bdev/nbd_common.sh@45 -- # return 0 00:28:22.704 06:20:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:22.704 06:20:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:22.964 06:20:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:22.964 06:20:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:22.964 06:20:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:22.964 06:20:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:22.964 06:20:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:22.964 06:20:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:22.964 06:20:53 -- bdev/nbd_common.sh@41 -- # break 00:28:22.964 06:20:53 -- bdev/nbd_common.sh@45 -- # return 0 00:28:22.964 06:20:53 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:22.964 06:20:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:22.964 06:20:53 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:23.222 06:20:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:23.222 06:20:53 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:23.222 06:20:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:23.222 06:20:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:23.222 06:20:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:23.222 06:20:53 -- bdev/nbd_common.sh@65 -- # echo '' 00:28:23.222 06:20:53 -- bdev/nbd_common.sh@65 -- # true 00:28:23.222 06:20:53 -- bdev/nbd_common.sh@65 -- # count=0 00:28:23.222 06:20:53 -- bdev/nbd_common.sh@66 -- # echo 0 00:28:23.222 06:20:53 -- bdev/nbd_common.sh@122 -- # count=0 00:28:23.222 06:20:53 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:28:23.222 06:20:53 -- bdev/nbd_common.sh@127 -- # return 0 00:28:23.222 06:20:53 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:28:23.222 06:20:53 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:23.222 06:20:53 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:28:23.222 06:20:53 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:28:23.222 06:20:53 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:23.222 06:20:53 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:28:23.222 06:20:53 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:28:23.222 06:20:53 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:23.222 06:20:53 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:28:23.222 06:20:53 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:23.222 06:20:53 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:23.222 06:20:53 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:23.222 06:20:53 -- bdev/nbd_common.sh@12 -- # local i 00:28:23.222 06:20:53 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:23.222 06:20:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:23.222 06:20:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:28:23.481 /dev/nbd0 00:28:23.481 06:20:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:23.481 06:20:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:23.481 06:20:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:28:23.481 06:20:54 -- common/autotest_common.sh@857 -- # local i 00:28:23.481 06:20:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:28:23.481 06:20:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:28:23.481 06:20:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:28:23.481 06:20:54 -- common/autotest_common.sh@861 -- # break 00:28:23.481 06:20:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:28:23.481 06:20:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:28:23.481 06:20:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:23.481 1+0 records in 00:28:23.481 1+0 records out 00:28:23.481 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604502 s, 6.8 MB/s 00:28:23.481 06:20:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:23.481 06:20:54 -- common/autotest_common.sh@874 -- # size=4096 00:28:23.481 06:20:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:23.481 06:20:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:28:23.481 06:20:54 -- common/autotest_common.sh@877 -- # return 0 00:28:23.481 06:20:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:23.481 06:20:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:23.481 06:20:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:28:23.739 /dev/nbd1 00:28:23.739 06:20:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:23.739 06:20:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:23.739 06:20:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:28:23.739 06:20:54 -- common/autotest_common.sh@857 -- # local i 00:28:23.739 06:20:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:28:23.739 06:20:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:28:23.739 06:20:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:28:23.739 06:20:54 -- common/autotest_common.sh@861 -- # break 00:28:23.739 06:20:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:28:23.739 06:20:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:28:23.739 06:20:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:23.739 1+0 records in 00:28:23.739 1+0 records out 00:28:23.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528632 s, 7.7 MB/s 00:28:23.739 06:20:54 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:23.739 06:20:54 -- common/autotest_common.sh@874 -- # size=4096 00:28:23.739 06:20:54 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:23.739 06:20:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:28:23.739 06:20:54 -- common/autotest_common.sh@877 -- # return 0 00:28:23.739 06:20:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:23.739 06:20:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:23.739 06:20:54 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 
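The NBD plumbing exercised in this stage is all plain RPC traffic on the socket shown above; a condensed sketch of the same flow, using only RPC names and paths that appear in these traces:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0   # export a bdev as an NBD device
  $rpc -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1
  $rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks                        # list the current exports
  $rpc -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0              # tear one export down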
00:28:23.739 06:20:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:23.739 06:20:54 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:23.998 06:20:54 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:28:23.998 { 00:28:23.998 "nbd_device": "/dev/nbd0", 00:28:23.998 "bdev_name": "Nvme0n1p1" 00:28:23.998 }, 00:28:23.998 { 00:28:23.998 "nbd_device": "/dev/nbd1", 00:28:23.998 "bdev_name": "Nvme0n1p2" 00:28:23.998 } 00:28:23.998 ]' 00:28:23.998 06:20:54 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:23.998 06:20:54 -- bdev/nbd_common.sh@64 -- # echo '[ 00:28:23.998 { 00:28:23.998 "nbd_device": "/dev/nbd0", 00:28:23.998 "bdev_name": "Nvme0n1p1" 00:28:23.998 }, 00:28:23.998 { 00:28:23.998 "nbd_device": "/dev/nbd1", 00:28:23.998 "bdev_name": "Nvme0n1p2" 00:28:23.998 } 00:28:23.998 ]' 00:28:23.998 06:20:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:28:23.998 /dev/nbd1' 00:28:23.998 06:20:54 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:28:23.998 /dev/nbd1' 00:28:23.998 06:20:54 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:23.998 06:20:54 -- bdev/nbd_common.sh@65 -- # count=2 00:28:23.998 06:20:54 -- bdev/nbd_common.sh@66 -- # echo 2 00:28:23.998 06:20:54 -- bdev/nbd_common.sh@95 -- # count=2 00:28:23.998 06:20:54 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:28:23.998 06:20:54 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:28:23.998 06:20:54 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:23.998 06:20:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:23.998 06:20:54 -- bdev/nbd_common.sh@71 -- # local operation=write 00:28:23.998 06:20:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:23.998 06:20:54 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:28:23.998 06:20:54 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:28:23.998 256+0 records in 00:28:23.998 256+0 records out 00:28:23.998 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00820072 s, 128 MB/s 00:28:23.998 06:20:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:23.998 06:20:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:28:23.998 256+0 records in 00:28:23.998 256+0 records out 00:28:23.998 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0770652 s, 13.6 MB/s 00:28:23.998 06:20:54 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:28:23.998 06:20:54 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:28:24.257 256+0 records in 00:28:24.257 256+0 records out 00:28:24.257 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0751281 s, 14.0 MB/s 00:28:24.257 06:20:54 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:28:24.257 06:20:54 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:24.257 06:20:54 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:28:24.257 06:20:54 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:28:24.257 06:20:54 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:24.257 06:20:54 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:28:24.257 06:20:54 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 
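The data check that follows is a plain write-then-compare pass over each NBD device; a minimal sketch with the same commands and sizes as the traces (the 1M compare length matches the 256 x 4096-byte writes):

  tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
  dd if=/dev/urandom of=$tmp bs=4096 count=256             # generate the reference data
  dd if=$tmp of=/dev/nbd0 bs=4096 count=256 oflag=direct   # write it through the NBD export
  cmp -b -n 1M $tmp /dev/nbd0                              # read back and compare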
00:28:24.257 06:20:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:24.257 06:20:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:28:24.257 06:20:54 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:28:24.257 06:20:54 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:28:24.257 06:20:54 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:28:24.257 06:20:54 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:28:24.257 06:20:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:24.257 06:20:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:24.257 06:20:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:24.257 06:20:54 -- bdev/nbd_common.sh@51 -- # local i 00:28:24.257 06:20:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:24.257 06:20:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:24.516 06:20:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:24.516 06:20:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:24.516 06:20:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:24.516 06:20:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:24.516 06:20:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:24.516 06:20:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:24.516 06:20:54 -- bdev/nbd_common.sh@41 -- # break 00:28:24.516 06:20:54 -- bdev/nbd_common.sh@45 -- # return 0 00:28:24.516 06:20:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:24.516 06:20:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:28:24.516 06:20:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:24.516 06:20:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:24.516 06:20:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:24.516 06:20:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:24.516 06:20:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:24.516 06:20:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:24.516 06:20:55 -- bdev/nbd_common.sh@41 -- # break 00:28:24.516 06:20:55 -- bdev/nbd_common.sh@45 -- # return 0 00:28:24.516 06:20:55 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:28:24.516 06:20:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:24.516 06:20:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:28:25.084 06:20:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:28:25.084 06:20:55 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:28:25.084 06:20:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:28:25.084 06:20:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:28:25.084 06:20:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:28:25.084 06:20:55 -- bdev/nbd_common.sh@65 -- # echo '' 00:28:25.084 06:20:55 -- bdev/nbd_common.sh@65 -- # true 00:28:25.084 06:20:55 -- bdev/nbd_common.sh@65 -- # count=0 00:28:25.084 06:20:55 -- bdev/nbd_common.sh@66 -- # echo 0 00:28:25.084 06:20:55 -- bdev/nbd_common.sh@104 -- # count=0 00:28:25.084 06:20:55 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:28:25.084 06:20:55 -- 
bdev/nbd_common.sh@109 -- # return 0 00:28:25.084 06:20:55 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:28:25.084 06:20:55 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:25.084 06:20:55 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:25.084 06:20:55 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:28:25.084 06:20:55 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:28:25.084 06:20:55 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:28:25.084 malloc_lvol_verify 00:28:25.084 06:20:55 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:28:25.343 4415206f-fe4e-4b81-9b43-c948b290e699 00:28:25.343 06:20:55 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:28:25.602 e7a9968d-9452-48b5-919b-bc324e958917 00:28:25.602 06:20:56 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:28:25.861 /dev/nbd0 00:28:25.861 06:20:56 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:28:25.861 mke2fs 1.46.5 (30-Dec-2021) 00:28:25.861 00:28:25.861 Filesystem too small for a journal 00:28:25.861 Discarding device blocks: 0/1024 done 00:28:25.861 Creating filesystem with 1024 4k blocks and 1024 inodes 00:28:25.861 00:28:25.861 Allocating group tables: 0/1 done 00:28:25.861 Writing inode tables: 0/1 done 00:28:25.861 Writing superblocks and filesystem accounting information: 0/1 done 00:28:25.861 00:28:25.861 06:20:56 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:28:25.861 06:20:56 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:28:25.861 06:20:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:28:25.861 06:20:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:25.861 06:20:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:25.861 06:20:56 -- bdev/nbd_common.sh@51 -- # local i 00:28:25.861 06:20:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:25.861 06:20:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:28:26.120 06:20:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:26.120 06:20:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:26.120 06:20:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:26.120 06:20:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:26.120 06:20:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:26.120 06:20:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:26.120 06:20:56 -- bdev/nbd_common.sh@41 -- # break 00:28:26.120 06:20:56 -- bdev/nbd_common.sh@45 -- # return 0 00:28:26.120 06:20:56 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:28:26.120 06:20:56 -- bdev/nbd_common.sh@147 -- # return 0 00:28:26.120 06:20:56 -- bdev/blockdev.sh@324 -- # killprocess 133746 00:28:26.120 06:20:56 -- common/autotest_common.sh@926 -- # '[' -z 133746 ']' 00:28:26.120 06:20:56 -- common/autotest_common.sh@930 -- # kill -0 133746 00:28:26.120 06:20:56 -- common/autotest_common.sh@931 -- # uname 00:28:26.120 06:20:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:26.120 06:20:56 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133746 00:28:26.120 06:20:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:26.120 06:20:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:26.120 06:20:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 133746' 00:28:26.120 killing process with pid 133746 00:28:26.120 06:20:56 -- common/autotest_common.sh@945 -- # kill 133746 00:28:26.120 06:20:56 -- common/autotest_common.sh@950 -- # wait 133746 00:28:27.541 06:20:57 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:28:27.541 00:28:27.541 real 0m6.607s 00:28:27.541 user 0m8.943s 00:28:27.541 sys 0m1.955s 00:28:27.541 06:20:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:27.541 06:20:57 -- common/autotest_common.sh@10 -- # set +x 00:28:27.541 ************************************ 00:28:27.541 END TEST bdev_nbd 00:28:27.541 ************************************ 00:28:27.541 06:20:57 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:28:27.541 06:20:57 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:28:27.541 06:20:57 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:28:27.541 skipping fio tests on NVMe due to multi-ns failures. 00:28:27.541 06:20:57 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:28:27.541 06:20:57 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:27.541 06:20:57 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:27.541 06:20:57 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:28:27.541 06:20:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:27.541 06:20:57 -- common/autotest_common.sh@10 -- # set +x 00:28:27.541 ************************************ 00:28:27.541 START TEST bdev_verify 00:28:27.541 ************************************ 00:28:27.541 06:20:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:28:27.541 [2024-06-11 06:20:57.979573] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:27.541 [2024-06-11 06:20:57.979795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134006 ] 00:28:27.541 [2024-06-11 06:20:58.168006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:27.800 [2024-06-11 06:20:58.334919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.800 [2024-06-11 06:20:58.334924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.368 Running I/O for 5 seconds... 
00:28:33.657 00:28:33.657 Latency(us) 00:28:33.657 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:33.657 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:33.657 Verification LBA range: start 0x0 length 0x4ff80 00:28:33.657 Nvme0n1p1 : 5.02 6419.14 25.07 0.00 0.00 19892.05 2200.14 27088.21 00:28:33.657 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:33.657 Verification LBA range: start 0x4ff80 length 0x4ff80 00:28:33.657 Nvme0n1p1 : 5.02 4908.30 19.17 0.00 0.00 26008.04 2621.44 32705.58 00:28:33.657 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:33.657 Verification LBA range: start 0x0 length 0x4ff7f 00:28:33.657 Nvme0n1p2 : 5.02 6417.30 25.07 0.00 0.00 19881.43 2215.74 26838.55 00:28:33.657 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:33.657 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:28:33.657 Nvme0n1p2 : 5.03 4905.30 19.16 0.00 0.00 25986.99 3417.23 27712.37 00:28:33.657 =================================================================================================================== 00:28:33.657 Total : 22650.03 88.48 0.00 0.00 22534.79 2200.14 32705.58 00:28:36.192 00:28:36.192 real 0m8.784s 00:28:36.192 user 0m16.339s 00:28:36.192 sys 0m0.285s 00:28:36.192 06:21:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:36.192 06:21:06 -- common/autotest_common.sh@10 -- # set +x 00:28:36.192 ************************************ 00:28:36.192 END TEST bdev_verify 00:28:36.192 ************************************ 00:28:36.192 06:21:06 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:36.192 06:21:06 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:28:36.192 06:21:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:36.192 06:21:06 -- common/autotest_common.sh@10 -- # set +x 00:28:36.192 ************************************ 00:28:36.192 START TEST bdev_verify_big_io 00:28:36.192 ************************************ 00:28:36.192 06:21:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:36.192 [2024-06-11 06:21:06.834489] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:36.192 [2024-06-11 06:21:06.834889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134121 ] 00:28:36.452 [2024-06-11 06:21:07.020237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:36.710 [2024-06-11 06:21:07.196195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.710 [2024-06-11 06:21:07.196202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.332 Running I/O for 5 seconds... 
00:28:42.624 00:28:42.624 Latency(us) 00:28:42.624 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.624 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:42.624 Verification LBA range: start 0x0 length 0x4ff8 00:28:42.624 Nvme0n1p1 : 5.17 670.65 41.92 0.00 0.00 188127.27 2933.52 319566.02 00:28:42.624 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:42.624 Verification LBA range: start 0x4ff8 length 0x4ff8 00:28:42.624 Nvme0n1p1 : 5.16 594.36 37.15 0.00 0.00 210753.85 35701.52 335544.32 00:28:42.624 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:42.624 Verification LBA range: start 0x0 length 0x4ff7 00:28:42.624 Nvme0n1p2 : 5.17 670.36 41.90 0.00 0.00 184717.15 3900.95 232684.01 00:28:42.624 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:42.624 Verification LBA range: start 0x4ff7 length 0x4ff7 00:28:42.624 Nvme0n1p2 : 5.17 610.40 38.15 0.00 0.00 202294.03 850.41 245666.38 00:28:42.624 =================================================================================================================== 00:28:42.624 Total : 2545.77 159.11 0.00 0.00 195902.27 850.41 335544.32 00:28:44.001 00:28:44.001 real 0m7.590s 00:28:44.001 user 0m13.963s 00:28:44.001 sys 0m0.257s 00:28:44.001 06:21:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:44.001 06:21:14 -- common/autotest_common.sh@10 -- # set +x 00:28:44.001 ************************************ 00:28:44.001 END TEST bdev_verify_big_io 00:28:44.001 ************************************ 00:28:44.001 06:21:14 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:44.001 06:21:14 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:28:44.001 06:21:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:44.001 06:21:14 -- common/autotest_common.sh@10 -- # set +x 00:28:44.001 ************************************ 00:28:44.001 START TEST bdev_write_zeroes 00:28:44.001 ************************************ 00:28:44.001 06:21:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:44.001 [2024-06-11 06:21:14.466628] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:44.001 [2024-06-11 06:21:14.466755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134223 ] 00:28:44.001 [2024-06-11 06:21:14.621152] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.260 [2024-06-11 06:21:14.797664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.826 Running I/O for 1 seconds... 
00:28:45.759 00:28:45.759 Latency(us) 00:28:45.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.759 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:45.759 Nvme0n1p1 : 1.00 31358.57 122.49 0.00 0.00 4074.15 2309.36 13856.18 00:28:45.759 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:45.759 Nvme0n1p2 : 1.01 31358.24 122.49 0.00 0.00 4070.09 1950.48 12170.97 00:28:45.759 =================================================================================================================== 00:28:45.759 Total : 62716.81 244.99 0.00 0.00 4072.12 1950.48 13856.18 00:28:47.139 00:28:47.139 real 0m2.995s 00:28:47.139 user 0m2.679s 00:28:47.139 sys 0m0.217s 00:28:47.139 06:21:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:47.139 06:21:17 -- common/autotest_common.sh@10 -- # set +x 00:28:47.139 ************************************ 00:28:47.139 END TEST bdev_write_zeroes 00:28:47.139 ************************************ 00:28:47.139 06:21:17 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:47.139 06:21:17 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:28:47.139 06:21:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:47.139 06:21:17 -- common/autotest_common.sh@10 -- # set +x 00:28:47.139 ************************************ 00:28:47.139 START TEST bdev_json_nonenclosed 00:28:47.139 ************************************ 00:28:47.139 06:21:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:47.139 [2024-06-11 06:21:17.559862] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:47.139 [2024-06-11 06:21:17.560084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134285 ] 00:28:47.139 [2024-06-11 06:21:17.746149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.398 [2024-06-11 06:21:17.926961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.398 [2024-06-11 06:21:17.927128] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
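The nonenclosed case above deliberately hands bdevperf a JSON document whose top level is not a single object, which json_config.c rejects with the 'not enclosed in {}' error seen here. The actual nonenclosed.json fixture is not shown in this log; one plausible shape (illustrative only) is a top-level array:

  # illustrative only -- not the real fixture from this run
  printf '[ { "subsystems": [] } ]\n' > /tmp/nonenclosed.json
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/nonenclosed.json \
      -q 128 -o 4096 -w write_zeroes -t 1 ''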
00:28:47.398 [2024-06-11 06:21:17.927160] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:47.966 00:28:47.966 real 0m0.860s 00:28:47.966 user 0m0.579s 00:28:47.966 sys 0m0.181s 00:28:47.966 06:21:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:47.966 06:21:18 -- common/autotest_common.sh@10 -- # set +x 00:28:47.966 ************************************ 00:28:47.966 END TEST bdev_json_nonenclosed 00:28:47.966 ************************************ 00:28:47.966 06:21:18 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:47.966 06:21:18 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:28:47.966 06:21:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:47.966 06:21:18 -- common/autotest_common.sh@10 -- # set +x 00:28:47.966 ************************************ 00:28:47.966 START TEST bdev_json_nonarray 00:28:47.966 ************************************ 00:28:47.966 06:21:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:47.966 [2024-06-11 06:21:18.457625] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:47.966 [2024-06-11 06:21:18.457753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134324 ] 00:28:48.225 [2024-06-11 06:21:18.613008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.225 [2024-06-11 06:21:18.787586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.225 [2024-06-11 06:21:18.787760] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:28:48.225 [2024-06-11 06:21:18.787798] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:48.794 00:28:48.794 real 0m0.794s 00:28:48.794 user 0m0.578s 00:28:48.794 sys 0m0.116s 00:28:48.794 06:21:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:48.794 06:21:19 -- common/autotest_common.sh@10 -- # set +x 00:28:48.794 ************************************ 00:28:48.794 END TEST bdev_json_nonarray 00:28:48.794 ************************************ 00:28:48.794 06:21:19 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:28:48.794 06:21:19 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:28:48.794 06:21:19 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:28:48.794 06:21:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:48.794 06:21:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:48.794 06:21:19 -- common/autotest_common.sh@10 -- # set +x 00:28:48.794 ************************************ 00:28:48.794 START TEST bdev_gpt_uuid 00:28:48.794 ************************************ 00:28:48.794 06:21:19 -- common/autotest_common.sh@1104 -- # bdev_gpt_uuid 00:28:48.794 06:21:19 -- bdev/blockdev.sh@612 -- # local bdev 00:28:48.794 06:21:19 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:28:48.794 06:21:19 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=134356 00:28:48.794 06:21:19 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:28:48.794 06:21:19 -- bdev/blockdev.sh@47 -- # waitforlisten 134356 00:28:48.794 06:21:19 -- common/autotest_common.sh@819 -- # '[' -z 134356 ']' 00:28:48.794 06:21:19 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:28:48.794 06:21:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:48.794 06:21:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:48.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:48.794 06:21:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:48.794 06:21:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:48.794 06:21:19 -- common/autotest_common.sh@10 -- # set +x 00:28:48.794 [2024-06-11 06:21:19.340675] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:28:48.794 [2024-06-11 06:21:19.340836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134356 ] 00:28:49.054 [2024-06-11 06:21:19.493868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.054 [2024-06-11 06:21:19.668463] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:49.054 [2024-06-11 06:21:19.668640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.433 06:21:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:50.433 06:21:20 -- common/autotest_common.sh@852 -- # return 0 00:28:50.433 06:21:20 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:50.433 06:21:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.433 06:21:20 -- common/autotest_common.sh@10 -- # set +x 00:28:50.433 Some configs were skipped because the RPC state that can call them passed over. 
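The gpt_uuid assertions that follow all sit on one RPC; a minimal sketch of the same queries against the spdk_tgt started above. In the traces, rpc_cmd is the harness's wrapper, so invoking scripts/rpc.py directly, as here, is an assumption, but the RPC name, the GUID and the jq filters are taken verbatim from this log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # look the first partition up by its unique GPT partition GUID
  $rpc bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 | jq -r '.[0].aliases[0]'
  # and read the GUID back out of the gpt driver_specific section
  $rpc bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 | jq -r '.[0].driver_specific.gpt.unique_partition_guid'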
00:28:50.433 06:21:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.433 06:21:21 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:28:50.433 06:21:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.433 06:21:21 -- common/autotest_common.sh@10 -- # set +x 00:28:50.433 06:21:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.433 06:21:21 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:28:50.433 06:21:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.433 06:21:21 -- common/autotest_common.sh@10 -- # set +x 00:28:50.433 06:21:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.433 06:21:21 -- bdev/blockdev.sh@619 -- # bdev='[ 00:28:50.433 { 00:28:50.433 "name": "Nvme0n1p1", 00:28:50.433 "aliases": [ 00:28:50.433 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:28:50.433 ], 00:28:50.433 "product_name": "GPT Disk", 00:28:50.433 "block_size": 4096, 00:28:50.433 "num_blocks": 655104, 00:28:50.433 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:28:50.433 "assigned_rate_limits": { 00:28:50.433 "rw_ios_per_sec": 0, 00:28:50.433 "rw_mbytes_per_sec": 0, 00:28:50.433 "r_mbytes_per_sec": 0, 00:28:50.433 "w_mbytes_per_sec": 0 00:28:50.433 }, 00:28:50.433 "claimed": false, 00:28:50.433 "zoned": false, 00:28:50.433 "supported_io_types": { 00:28:50.433 "read": true, 00:28:50.433 "write": true, 00:28:50.433 "unmap": true, 00:28:50.433 "write_zeroes": true, 00:28:50.433 "flush": true, 00:28:50.433 "reset": true, 00:28:50.433 "compare": true, 00:28:50.433 "compare_and_write": false, 00:28:50.433 "abort": true, 00:28:50.433 "nvme_admin": false, 00:28:50.433 "nvme_io": false 00:28:50.433 }, 00:28:50.433 "driver_specific": { 00:28:50.433 "gpt": { 00:28:50.433 "base_bdev": "Nvme0n1", 00:28:50.433 "offset_blocks": 256, 00:28:50.433 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:28:50.433 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:28:50.433 "partition_name": "SPDK_TEST_first" 00:28:50.433 } 00:28:50.433 } 00:28:50.433 } 00:28:50.433 ]' 00:28:50.433 06:21:21 -- bdev/blockdev.sh@620 -- # jq -r length 00:28:50.693 06:21:21 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:28:50.693 06:21:21 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:28:50.693 06:21:21 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:28:50.693 06:21:21 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:28:50.693 06:21:21 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:28:50.693 06:21:21 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:28:50.693 06:21:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.693 06:21:21 -- common/autotest_common.sh@10 -- # set +x 00:28:50.693 06:21:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.693 06:21:21 -- bdev/blockdev.sh@624 -- # bdev='[ 00:28:50.693 { 00:28:50.693 "name": "Nvme0n1p2", 00:28:50.693 "aliases": [ 00:28:50.693 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:28:50.693 ], 00:28:50.693 "product_name": "GPT Disk", 00:28:50.693 "block_size": 4096, 00:28:50.693 "num_blocks": 655103, 00:28:50.693 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:28:50.693 "assigned_rate_limits": { 00:28:50.693 "rw_ios_per_sec": 0, 00:28:50.693 
"rw_mbytes_per_sec": 0, 00:28:50.693 "r_mbytes_per_sec": 0, 00:28:50.693 "w_mbytes_per_sec": 0 00:28:50.693 }, 00:28:50.693 "claimed": false, 00:28:50.693 "zoned": false, 00:28:50.693 "supported_io_types": { 00:28:50.693 "read": true, 00:28:50.693 "write": true, 00:28:50.693 "unmap": true, 00:28:50.693 "write_zeroes": true, 00:28:50.693 "flush": true, 00:28:50.693 "reset": true, 00:28:50.693 "compare": true, 00:28:50.693 "compare_and_write": false, 00:28:50.693 "abort": true, 00:28:50.693 "nvme_admin": false, 00:28:50.693 "nvme_io": false 00:28:50.693 }, 00:28:50.693 "driver_specific": { 00:28:50.693 "gpt": { 00:28:50.693 "base_bdev": "Nvme0n1", 00:28:50.693 "offset_blocks": 655360, 00:28:50.693 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:28:50.693 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:28:50.693 "partition_name": "SPDK_TEST_second" 00:28:50.693 } 00:28:50.693 } 00:28:50.693 } 00:28:50.693 ]' 00:28:50.693 06:21:21 -- bdev/blockdev.sh@625 -- # jq -r length 00:28:50.693 06:21:21 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:28:50.693 06:21:21 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:28:50.693 06:21:21 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:28:50.693 06:21:21 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:28:50.693 06:21:21 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:28:50.693 06:21:21 -- bdev/blockdev.sh@629 -- # killprocess 134356 00:28:50.693 06:21:21 -- common/autotest_common.sh@926 -- # '[' -z 134356 ']' 00:28:50.693 06:21:21 -- common/autotest_common.sh@930 -- # kill -0 134356 00:28:50.693 06:21:21 -- common/autotest_common.sh@931 -- # uname 00:28:50.693 06:21:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:50.693 06:21:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 134356 00:28:50.952 06:21:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:50.952 06:21:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:50.952 killing process with pid 134356 00:28:50.952 06:21:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 134356' 00:28:50.952 06:21:21 -- common/autotest_common.sh@945 -- # kill 134356 00:28:50.952 06:21:21 -- common/autotest_common.sh@950 -- # wait 134356 00:28:53.488 00:28:53.488 real 0m4.271s 00:28:53.488 user 0m4.549s 00:28:53.488 sys 0m0.522s 00:28:53.488 06:21:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:53.488 06:21:23 -- common/autotest_common.sh@10 -- # set +x 00:28:53.488 ************************************ 00:28:53.488 END TEST bdev_gpt_uuid 00:28:53.488 ************************************ 00:28:53.488 06:21:23 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:28:53.488 06:21:23 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:28:53.488 06:21:23 -- bdev/blockdev.sh@809 -- # cleanup 00:28:53.488 06:21:23 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:28:53.488 06:21:23 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:53.488 06:21:23 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 00:28:53.488 06:21:23 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:28:53.488 06:21:23 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:28:53.488 06:21:23 -- 
bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:53.488 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:28:53.488 Waiting for block devices as requested 00:28:53.748 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:28:53.748 06:21:24 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme0n1 ]] 00:28:53.748 06:21:24 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme0n1 00:28:53.748 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:28:53.748 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:28:53.748 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:28:53.748 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:28:53.748 06:21:24 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:28:53.748 00:28:53.748 real 0m48.181s 00:28:53.748 user 1m8.215s 00:28:53.748 sys 0m8.032s 00:28:53.748 06:21:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:53.748 06:21:24 -- common/autotest_common.sh@10 -- # set +x 00:28:53.748 ************************************ 00:28:53.748 END TEST blockdev_nvme_gpt 00:28:53.748 ************************************ 00:28:53.748 06:21:24 -- spdk/autotest.sh@222 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:28:53.748 06:21:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:53.748 06:21:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:53.748 06:21:24 -- common/autotest_common.sh@10 -- # set +x 00:28:53.748 ************************************ 00:28:53.748 START TEST nvme 00:28:53.748 ************************************ 00:28:53.748 06:21:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:28:54.007 * Looking for test storage... 00:28:54.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:28:54.007 06:21:24 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:54.577 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:28:54.577 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:28:56.484 06:21:27 -- nvme/nvme.sh@79 -- # uname 00:28:56.484 06:21:27 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:28:56.484 06:21:27 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:28:56.484 06:21:27 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:28:56.484 06:21:27 -- common/autotest_common.sh@1058 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:28:56.484 06:21:27 -- common/autotest_common.sh@1044 -- # _randomize_va_space=2 00:28:56.484 06:21:27 -- common/autotest_common.sh@1045 -- # echo 0 00:28:56.484 06:21:27 -- common/autotest_common.sh@1046 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:28:56.484 06:21:27 -- common/autotest_common.sh@1047 -- # stubpid=134792 00:28:56.484 Waiting for stub to ready for secondary processes... 00:28:56.484 06:21:27 -- common/autotest_common.sh@1048 -- # echo Waiting for stub to ready for secondary processes... 00:28:56.484 06:21:27 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:28:56.484 06:21:27 -- common/autotest_common.sh@1051 -- # [[ -e /proc/134792 ]] 00:28:56.484 06:21:27 -- common/autotest_common.sh@1052 -- # sleep 1s 00:28:56.743 [2024-06-11 06:21:27.160569] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:28:56.743 [2024-06-11 06:21:27.160718] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:57.681 06:21:28 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:28:57.682 06:21:28 -- common/autotest_common.sh@1051 -- # [[ -e /proc/134792 ]] 00:28:57.682 06:21:28 -- common/autotest_common.sh@1052 -- # sleep 1s 00:28:58.620 [2024-06-11 06:21:28.990333] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:58.620 06:21:29 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:28:58.620 06:21:29 -- common/autotest_common.sh@1051 -- # [[ -e /proc/134792 ]] 00:28:58.620 06:21:29 -- common/autotest_common.sh@1052 -- # sleep 1s 00:28:58.620 [2024-06-11 06:21:29.202367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:58.620 [2024-06-11 06:21:29.202553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.620 [2024-06-11 06:21:29.202554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:58.620 [2024-06-11 06:21:29.224690] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:28:58.620 [2024-06-11 06:21:29.237713] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:28:58.620 [2024-06-11 06:21:29.238426] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:28:59.556 06:21:30 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:28:59.556 done. 00:28:59.557 06:21:30 -- common/autotest_common.sh@1054 -- # echo done. 00:28:59.557 06:21:30 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:28:59.557 06:21:30 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:28:59.557 06:21:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:59.557 06:21:30 -- common/autotest_common.sh@10 -- # set +x 00:28:59.557 ************************************ 00:28:59.557 START TEST nvme_reset 00:28:59.557 ************************************ 00:28:59.557 06:21:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:29:00.124 Initializing NVMe Controllers 00:29:00.124 Skipping QEMU NVMe SSD at 0000:00:06.0 00:29:00.124 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:29:00.124 00:29:00.124 real 0m0.362s 00:29:00.124 user 0m0.132s 00:29:00.124 sys 0m0.154s 00:29:00.124 06:21:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:00.124 06:21:30 -- common/autotest_common.sh@10 -- # set +x 00:29:00.124 ************************************ 00:29:00.124 END TEST nvme_reset 00:29:00.124 ************************************ 00:29:00.124 06:21:30 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:29:00.124 06:21:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:00.124 06:21:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:00.124 06:21:30 -- common/autotest_common.sh@10 -- # set +x 00:29:00.124 ************************************ 00:29:00.124 START TEST nvme_identify 00:29:00.124 ************************************ 00:29:00.124 06:21:30 -- common/autotest_common.sh@1104 -- # nvme_identify 00:29:00.124 06:21:30 -- nvme/nvme.sh@12 -- # bdfs=() 00:29:00.124 06:21:30 -- 
nvme/nvme.sh@12 -- # local bdfs bdf 00:29:00.124 06:21:30 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:29:00.124 06:21:30 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:29:00.124 06:21:30 -- common/autotest_common.sh@1498 -- # bdfs=() 00:29:00.124 06:21:30 -- common/autotest_common.sh@1498 -- # local bdfs 00:29:00.124 06:21:30 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:00.124 06:21:30 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:00.125 06:21:30 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:00.125 06:21:30 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:29:00.125 06:21:30 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:29:00.125 06:21:30 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:29:00.385 [2024-06-11 06:21:30.908174] nvme_ctrlr.c:3471:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 134843 terminated unexpected 00:29:00.385 ===================================================== 00:29:00.385 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:00.385 ===================================================== 00:29:00.385 Controller Capabilities/Features 00:29:00.385 ================================ 00:29:00.385 Vendor ID: 1b36 00:29:00.385 Subsystem Vendor ID: 1af4 00:29:00.385 Serial Number: 12340 00:29:00.385 Model Number: QEMU NVMe Ctrl 00:29:00.385 Firmware Version: 8.0.0 00:29:00.385 Recommended Arb Burst: 6 00:29:00.385 IEEE OUI Identifier: 00 54 52 00:29:00.385 Multi-path I/O 00:29:00.385 May have multiple subsystem ports: No 00:29:00.385 May have multiple controllers: No 00:29:00.385 Associated with SR-IOV VF: No 00:29:00.385 Max Data Transfer Size: 524288 00:29:00.385 Max Number of Namespaces: 256 00:29:00.385 Max Number of I/O Queues: 64 00:29:00.385 NVMe Specification Version (VS): 1.4 00:29:00.385 NVMe Specification Version (Identify): 1.4 00:29:00.385 Maximum Queue Entries: 2048 00:29:00.385 Contiguous Queues Required: Yes 00:29:00.385 Arbitration Mechanisms Supported 00:29:00.385 Weighted Round Robin: Not Supported 00:29:00.385 Vendor Specific: Not Supported 00:29:00.385 Reset Timeout: 7500 ms 00:29:00.385 Doorbell Stride: 4 bytes 00:29:00.385 NVM Subsystem Reset: Not Supported 00:29:00.385 Command Sets Supported 00:29:00.385 NVM Command Set: Supported 00:29:00.385 Boot Partition: Not Supported 00:29:00.385 Memory Page Size Minimum: 4096 bytes 00:29:00.385 Memory Page Size Maximum: 65536 bytes 00:29:00.385 Persistent Memory Region: Not Supported 00:29:00.385 Optional Asynchronous Events Supported 00:29:00.385 Namespace Attribute Notices: Supported 00:29:00.385 Firmware Activation Notices: Not Supported 00:29:00.385 ANA Change Notices: Not Supported 00:29:00.385 PLE Aggregate Log Change Notices: Not Supported 00:29:00.385 LBA Status Info Alert Notices: Not Supported 00:29:00.385 EGE Aggregate Log Change Notices: Not Supported 00:29:00.385 Normal NVM Subsystem Shutdown event: Not Supported 00:29:00.385 Zone Descriptor Change Notices: Not Supported 00:29:00.385 Discovery Log Change Notices: Not Supported 00:29:00.385 Controller Attributes 00:29:00.385 128-bit Host Identifier: Not Supported 00:29:00.385 Non-Operational Permissive Mode: Not Supported 00:29:00.385 NVM Sets: Not Supported 00:29:00.385 Read Recovery Levels: Not Supported 00:29:00.385 Endurance Groups: Not Supported 00:29:00.385 Predictable Latency Mode: Not Supported 00:29:00.385 
Traffic Based Keep ALive: Not Supported 00:29:00.385 Namespace Granularity: Not Supported 00:29:00.385 SQ Associations: Not Supported 00:29:00.385 UUID List: Not Supported 00:29:00.385 Multi-Domain Subsystem: Not Supported 00:29:00.385 Fixed Capacity Management: Not Supported 00:29:00.385 Variable Capacity Management: Not Supported 00:29:00.385 Delete Endurance Group: Not Supported 00:29:00.385 Delete NVM Set: Not Supported 00:29:00.385 Extended LBA Formats Supported: Supported 00:29:00.385 Flexible Data Placement Supported: Not Supported 00:29:00.385 00:29:00.385 Controller Memory Buffer Support 00:29:00.385 ================================ 00:29:00.385 Supported: No 00:29:00.385 00:29:00.385 Persistent Memory Region Support 00:29:00.385 ================================ 00:29:00.385 Supported: No 00:29:00.385 00:29:00.385 Admin Command Set Attributes 00:29:00.385 ============================ 00:29:00.385 Security Send/Receive: Not Supported 00:29:00.385 Format NVM: Supported 00:29:00.385 Firmware Activate/Download: Not Supported 00:29:00.385 Namespace Management: Supported 00:29:00.385 Device Self-Test: Not Supported 00:29:00.385 Directives: Supported 00:29:00.385 NVMe-MI: Not Supported 00:29:00.385 Virtualization Management: Not Supported 00:29:00.385 Doorbell Buffer Config: Supported 00:29:00.385 Get LBA Status Capability: Not Supported 00:29:00.385 Command & Feature Lockdown Capability: Not Supported 00:29:00.385 Abort Command Limit: 4 00:29:00.385 Async Event Request Limit: 4 00:29:00.385 Number of Firmware Slots: N/A 00:29:00.385 Firmware Slot 1 Read-Only: N/A 00:29:00.385 Firmware Activation Without Reset: N/A 00:29:00.385 Multiple Update Detection Support: N/A 00:29:00.385 Firmware Update Granularity: No Information Provided 00:29:00.385 Per-Namespace SMART Log: Yes 00:29:00.385 Asymmetric Namespace Access Log Page: Not Supported 00:29:00.385 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:29:00.385 Command Effects Log Page: Supported 00:29:00.385 Get Log Page Extended Data: Supported 00:29:00.385 Telemetry Log Pages: Not Supported 00:29:00.385 Persistent Event Log Pages: Not Supported 00:29:00.385 Supported Log Pages Log Page: May Support 00:29:00.385 Commands Supported & Effects Log Page: Not Supported 00:29:00.385 Feature Identifiers & Effects Log Page:May Support 00:29:00.385 NVMe-MI Commands & Effects Log Page: May Support 00:29:00.385 Data Area 4 for Telemetry Log: Not Supported 00:29:00.385 Error Log Page Entries Supported: 1 00:29:00.385 Keep Alive: Not Supported 00:29:00.385 00:29:00.385 NVM Command Set Attributes 00:29:00.385 ========================== 00:29:00.385 Submission Queue Entry Size 00:29:00.385 Max: 64 00:29:00.385 Min: 64 00:29:00.385 Completion Queue Entry Size 00:29:00.385 Max: 16 00:29:00.385 Min: 16 00:29:00.385 Number of Namespaces: 256 00:29:00.385 Compare Command: Supported 00:29:00.385 Write Uncorrectable Command: Not Supported 00:29:00.385 Dataset Management Command: Supported 00:29:00.385 Write Zeroes Command: Supported 00:29:00.385 Set Features Save Field: Supported 00:29:00.385 Reservations: Not Supported 00:29:00.385 Timestamp: Supported 00:29:00.385 Copy: Supported 00:29:00.385 Volatile Write Cache: Present 00:29:00.385 Atomic Write Unit (Normal): 1 00:29:00.385 Atomic Write Unit (PFail): 1 00:29:00.385 Atomic Compare & Write Unit: 1 00:29:00.385 Fused Compare & Write: Not Supported 00:29:00.385 Scatter-Gather List 00:29:00.385 SGL Command Set: Supported 00:29:00.385 SGL Keyed: Not Supported 00:29:00.385 SGL Bit Bucket Descriptor: Not Supported 
00:29:00.385 SGL Metadata Pointer: Not Supported 00:29:00.385 Oversized SGL: Not Supported 00:29:00.385 SGL Metadata Address: Not Supported 00:29:00.385 SGL Offset: Not Supported 00:29:00.385 Transport SGL Data Block: Not Supported 00:29:00.385 Replay Protected Memory Block: Not Supported 00:29:00.385 00:29:00.385 Firmware Slot Information 00:29:00.385 ========================= 00:29:00.385 Active slot: 1 00:29:00.385 Slot 1 Firmware Revision: 1.0 00:29:00.385 00:29:00.385 00:29:00.385 Commands Supported and Effects 00:29:00.385 ============================== 00:29:00.385 Admin Commands 00:29:00.385 -------------- 00:29:00.385 Delete I/O Submission Queue (00h): Supported 00:29:00.385 Create I/O Submission Queue (01h): Supported 00:29:00.385 Get Log Page (02h): Supported 00:29:00.385 Delete I/O Completion Queue (04h): Supported 00:29:00.385 Create I/O Completion Queue (05h): Supported 00:29:00.385 Identify (06h): Supported 00:29:00.385 Abort (08h): Supported 00:29:00.385 Set Features (09h): Supported 00:29:00.385 Get Features (0Ah): Supported 00:29:00.385 Asynchronous Event Request (0Ch): Supported 00:29:00.385 Namespace Attachment (15h): Supported NS-Inventory-Change 00:29:00.385 Directive Send (19h): Supported 00:29:00.385 Directive Receive (1Ah): Supported 00:29:00.385 Virtualization Management (1Ch): Supported 00:29:00.385 Doorbell Buffer Config (7Ch): Supported 00:29:00.386 Format NVM (80h): Supported LBA-Change 00:29:00.386 I/O Commands 00:29:00.386 ------------ 00:29:00.386 Flush (00h): Supported LBA-Change 00:29:00.386 Write (01h): Supported LBA-Change 00:29:00.386 Read (02h): Supported 00:29:00.386 Compare (05h): Supported 00:29:00.386 Write Zeroes (08h): Supported LBA-Change 00:29:00.386 Dataset Management (09h): Supported LBA-Change 00:29:00.386 Unknown (0Ch): Supported 00:29:00.386 Unknown (12h): Supported 00:29:00.386 Copy (19h): Supported LBA-Change 00:29:00.386 Unknown (1Dh): Supported LBA-Change 00:29:00.386 00:29:00.386 Error Log 00:29:00.386 ========= 00:29:00.386 00:29:00.386 Arbitration 00:29:00.386 =========== 00:29:00.386 Arbitration Burst: no limit 00:29:00.386 00:29:00.386 Power Management 00:29:00.386 ================ 00:29:00.386 Number of Power States: 1 00:29:00.386 Current Power State: Power State #0 00:29:00.386 Power State #0: 00:29:00.386 Max Power: 25.00 W 00:29:00.386 Non-Operational State: Operational 00:29:00.386 Entry Latency: 16 microseconds 00:29:00.386 Exit Latency: 4 microseconds 00:29:00.386 Relative Read Throughput: 0 00:29:00.386 Relative Read Latency: 0 00:29:00.386 Relative Write Throughput: 0 00:29:00.386 Relative Write Latency: 0 00:29:00.386 Idle Power: Not Reported 00:29:00.386 Active Power: Not Reported 00:29:00.386 Non-Operational Permissive Mode: Not Supported 00:29:00.386 00:29:00.386 Health Information 00:29:00.386 ================== 00:29:00.386 Critical Warnings: 00:29:00.386 Available Spare Space: OK 00:29:00.386 Temperature: OK 00:29:00.386 Device Reliability: OK 00:29:00.386 Read Only: No 00:29:00.386 Volatile Memory Backup: OK 00:29:00.386 Current Temperature: 323 Kelvin (50 Celsius) 00:29:00.386 Temperature Threshold: 343 Kelvin (70 Celsius) 00:29:00.386 Available Spare: 0% 00:29:00.386 Available Spare Threshold: 0% 00:29:00.386 Life Percentage Used: 0% 00:29:00.386 Data Units Read: 5859 00:29:00.386 Data Units Written: 2834 00:29:00.386 Host Read Commands: 296614 00:29:00.386 Host Write Commands: 163491 00:29:00.386 Controller Busy Time: 0 minutes 00:29:00.386 Power Cycles: 0 00:29:00.386 Power On Hours: 0 hours 00:29:00.386 
Unsafe Shutdowns: 0 00:29:00.386 Unrecoverable Media Errors: 0 00:29:00.386 Lifetime Error Log Entries: 0 00:29:00.386 Warning Temperature Time: 0 minutes 00:29:00.386 Critical Temperature Time: 0 minutes 00:29:00.386 00:29:00.386 Number of Queues 00:29:00.386 ================ 00:29:00.386 Number of I/O Submission Queues: 64 00:29:00.386 Number of I/O Completion Queues: 64 00:29:00.386 00:29:00.386 ZNS Specific Controller Data 00:29:00.386 ============================ 00:29:00.386 Zone Append Size Limit: 0 00:29:00.386 00:29:00.386 00:29:00.386 Active Namespaces 00:29:00.386 ================= 00:29:00.386 Namespace ID:1 00:29:00.386 Error Recovery Timeout: Unlimited 00:29:00.386 Command Set Identifier: NVM (00h) 00:29:00.386 Deallocate: Supported 00:29:00.386 Deallocated/Unwritten Error: Supported 00:29:00.386 Deallocated Read Value: All 0x00 00:29:00.386 Deallocate in Write Zeroes: Not Supported 00:29:00.386 Deallocated Guard Field: 0xFFFF 00:29:00.386 Flush: Supported 00:29:00.386 Reservation: Not Supported 00:29:00.386 Namespace Sharing Capabilities: Private 00:29:00.386 Size (in LBAs): 1310720 (5GiB) 00:29:00.386 Capacity (in LBAs): 1310720 (5GiB) 00:29:00.386 Utilization (in LBAs): 1310720 (5GiB) 00:29:00.386 Thin Provisioning: Not Supported 00:29:00.386 Per-NS Atomic Units: No 00:29:00.386 Maximum Single Source Range Length: 128 00:29:00.386 Maximum Copy Length: 128 00:29:00.386 Maximum Source Range Count: 128 00:29:00.386 NGUID/EUI64 Never Reused: No 00:29:00.386 Namespace Write Protected: No 00:29:00.386 Number of LBA Formats: 8 00:29:00.386 Current LBA Format: LBA Format #04 00:29:00.386 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:00.386 LBA Format #01: Data Size: 512 Metadata Size: 8 00:29:00.386 LBA Format #02: Data Size: 512 Metadata Size: 16 00:29:00.386 LBA Format #03: Data Size: 512 Metadata Size: 64 00:29:00.386 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:29:00.386 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:29:00.386 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:29:00.386 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:29:00.386 00:29:00.386 06:21:30 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:29:00.386 06:21:30 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:29:00.956 ===================================================== 00:29:00.956 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:00.956 ===================================================== 00:29:00.956 Controller Capabilities/Features 00:29:00.956 ================================ 00:29:00.956 Vendor ID: 1b36 00:29:00.956 Subsystem Vendor ID: 1af4 00:29:00.956 Serial Number: 12340 00:29:00.956 Model Number: QEMU NVMe Ctrl 00:29:00.956 Firmware Version: 8.0.0 00:29:00.956 Recommended Arb Burst: 6 00:29:00.956 IEEE OUI Identifier: 00 54 52 00:29:00.956 Multi-path I/O 00:29:00.956 May have multiple subsystem ports: No 00:29:00.956 May have multiple controllers: No 00:29:00.956 Associated with SR-IOV VF: No 00:29:00.956 Max Data Transfer Size: 524288 00:29:00.956 Max Number of Namespaces: 256 00:29:00.956 Max Number of I/O Queues: 64 00:29:00.956 NVMe Specification Version (VS): 1.4 00:29:00.956 NVMe Specification Version (Identify): 1.4 00:29:00.956 Maximum Queue Entries: 2048 00:29:00.956 Contiguous Queues Required: Yes 00:29:00.956 Arbitration Mechanisms Supported 00:29:00.956 Weighted Round Robin: Not Supported 00:29:00.956 Vendor Specific: Not Supported 00:29:00.956 Reset Timeout: 7500 ms 
00:29:00.956 Doorbell Stride: 4 bytes 00:29:00.956 NVM Subsystem Reset: Not Supported 00:29:00.956 Command Sets Supported 00:29:00.956 NVM Command Set: Supported 00:29:00.956 Boot Partition: Not Supported 00:29:00.956 Memory Page Size Minimum: 4096 bytes 00:29:00.956 Memory Page Size Maximum: 65536 bytes 00:29:00.956 Persistent Memory Region: Not Supported 00:29:00.956 Optional Asynchronous Events Supported 00:29:00.956 Namespace Attribute Notices: Supported 00:29:00.956 Firmware Activation Notices: Not Supported 00:29:00.956 ANA Change Notices: Not Supported 00:29:00.956 PLE Aggregate Log Change Notices: Not Supported 00:29:00.956 LBA Status Info Alert Notices: Not Supported 00:29:00.956 EGE Aggregate Log Change Notices: Not Supported 00:29:00.956 Normal NVM Subsystem Shutdown event: Not Supported 00:29:00.956 Zone Descriptor Change Notices: Not Supported 00:29:00.956 Discovery Log Change Notices: Not Supported 00:29:00.956 Controller Attributes 00:29:00.956 128-bit Host Identifier: Not Supported 00:29:00.956 Non-Operational Permissive Mode: Not Supported 00:29:00.956 NVM Sets: Not Supported 00:29:00.956 Read Recovery Levels: Not Supported 00:29:00.956 Endurance Groups: Not Supported 00:29:00.956 Predictable Latency Mode: Not Supported 00:29:00.956 Traffic Based Keep ALive: Not Supported 00:29:00.956 Namespace Granularity: Not Supported 00:29:00.956 SQ Associations: Not Supported 00:29:00.956 UUID List: Not Supported 00:29:00.956 Multi-Domain Subsystem: Not Supported 00:29:00.956 Fixed Capacity Management: Not Supported 00:29:00.956 Variable Capacity Management: Not Supported 00:29:00.956 Delete Endurance Group: Not Supported 00:29:00.956 Delete NVM Set: Not Supported 00:29:00.956 Extended LBA Formats Supported: Supported 00:29:00.956 Flexible Data Placement Supported: Not Supported 00:29:00.956 00:29:00.956 Controller Memory Buffer Support 00:29:00.956 ================================ 00:29:00.956 Supported: No 00:29:00.956 00:29:00.956 Persistent Memory Region Support 00:29:00.956 ================================ 00:29:00.956 Supported: No 00:29:00.956 00:29:00.956 Admin Command Set Attributes 00:29:00.956 ============================ 00:29:00.956 Security Send/Receive: Not Supported 00:29:00.956 Format NVM: Supported 00:29:00.956 Firmware Activate/Download: Not Supported 00:29:00.956 Namespace Management: Supported 00:29:00.956 Device Self-Test: Not Supported 00:29:00.956 Directives: Supported 00:29:00.956 NVMe-MI: Not Supported 00:29:00.956 Virtualization Management: Not Supported 00:29:00.956 Doorbell Buffer Config: Supported 00:29:00.956 Get LBA Status Capability: Not Supported 00:29:00.956 Command & Feature Lockdown Capability: Not Supported 00:29:00.956 Abort Command Limit: 4 00:29:00.956 Async Event Request Limit: 4 00:29:00.957 Number of Firmware Slots: N/A 00:29:00.957 Firmware Slot 1 Read-Only: N/A 00:29:00.957 Firmware Activation Without Reset: N/A 00:29:00.957 Multiple Update Detection Support: N/A 00:29:00.957 Firmware Update Granularity: No Information Provided 00:29:00.957 Per-Namespace SMART Log: Yes 00:29:00.957 Asymmetric Namespace Access Log Page: Not Supported 00:29:00.957 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:29:00.957 Command Effects Log Page: Supported 00:29:00.957 Get Log Page Extended Data: Supported 00:29:00.957 Telemetry Log Pages: Not Supported 00:29:00.957 Persistent Event Log Pages: Not Supported 00:29:00.957 Supported Log Pages Log Page: May Support 00:29:00.957 Commands Supported & Effects Log Page: Not Supported 00:29:00.957 Feature Identifiers & 
Effects Log Page:May Support 00:29:00.957 NVMe-MI Commands & Effects Log Page: May Support 00:29:00.957 Data Area 4 for Telemetry Log: Not Supported 00:29:00.957 Error Log Page Entries Supported: 1 00:29:00.957 Keep Alive: Not Supported 00:29:00.957 00:29:00.957 NVM Command Set Attributes 00:29:00.957 ========================== 00:29:00.957 Submission Queue Entry Size 00:29:00.957 Max: 64 00:29:00.957 Min: 64 00:29:00.957 Completion Queue Entry Size 00:29:00.957 Max: 16 00:29:00.957 Min: 16 00:29:00.957 Number of Namespaces: 256 00:29:00.957 Compare Command: Supported 00:29:00.957 Write Uncorrectable Command: Not Supported 00:29:00.957 Dataset Management Command: Supported 00:29:00.957 Write Zeroes Command: Supported 00:29:00.957 Set Features Save Field: Supported 00:29:00.957 Reservations: Not Supported 00:29:00.957 Timestamp: Supported 00:29:00.957 Copy: Supported 00:29:00.957 Volatile Write Cache: Present 00:29:00.957 Atomic Write Unit (Normal): 1 00:29:00.957 Atomic Write Unit (PFail): 1 00:29:00.957 Atomic Compare & Write Unit: 1 00:29:00.957 Fused Compare & Write: Not Supported 00:29:00.957 Scatter-Gather List 00:29:00.957 SGL Command Set: Supported 00:29:00.957 SGL Keyed: Not Supported 00:29:00.957 SGL Bit Bucket Descriptor: Not Supported 00:29:00.957 SGL Metadata Pointer: Not Supported 00:29:00.957 Oversized SGL: Not Supported 00:29:00.957 SGL Metadata Address: Not Supported 00:29:00.957 SGL Offset: Not Supported 00:29:00.957 Transport SGL Data Block: Not Supported 00:29:00.957 Replay Protected Memory Block: Not Supported 00:29:00.957 00:29:00.957 Firmware Slot Information 00:29:00.957 ========================= 00:29:00.957 Active slot: 1 00:29:00.957 Slot 1 Firmware Revision: 1.0 00:29:00.957 00:29:00.957 00:29:00.957 Commands Supported and Effects 00:29:00.957 ============================== 00:29:00.957 Admin Commands 00:29:00.957 -------------- 00:29:00.957 Delete I/O Submission Queue (00h): Supported 00:29:00.957 Create I/O Submission Queue (01h): Supported 00:29:00.957 Get Log Page (02h): Supported 00:29:00.957 Delete I/O Completion Queue (04h): Supported 00:29:00.957 Create I/O Completion Queue (05h): Supported 00:29:00.957 Identify (06h): Supported 00:29:00.957 Abort (08h): Supported 00:29:00.957 Set Features (09h): Supported 00:29:00.957 Get Features (0Ah): Supported 00:29:00.957 Asynchronous Event Request (0Ch): Supported 00:29:00.957 Namespace Attachment (15h): Supported NS-Inventory-Change 00:29:00.957 Directive Send (19h): Supported 00:29:00.957 Directive Receive (1Ah): Supported 00:29:00.957 Virtualization Management (1Ch): Supported 00:29:00.957 Doorbell Buffer Config (7Ch): Supported 00:29:00.957 Format NVM (80h): Supported LBA-Change 00:29:00.957 I/O Commands 00:29:00.957 ------------ 00:29:00.957 Flush (00h): Supported LBA-Change 00:29:00.957 Write (01h): Supported LBA-Change 00:29:00.957 Read (02h): Supported 00:29:00.957 Compare (05h): Supported 00:29:00.957 Write Zeroes (08h): Supported LBA-Change 00:29:00.957 Dataset Management (09h): Supported LBA-Change 00:29:00.957 Unknown (0Ch): Supported 00:29:00.957 Unknown (12h): Supported 00:29:00.957 Copy (19h): Supported LBA-Change 00:29:00.957 Unknown (1Dh): Supported LBA-Change 00:29:00.957 00:29:00.957 Error Log 00:29:00.957 ========= 00:29:00.957 00:29:00.957 Arbitration 00:29:00.957 =========== 00:29:00.957 Arbitration Burst: no limit 00:29:00.957 00:29:00.957 Power Management 00:29:00.957 ================ 00:29:00.957 Number of Power States: 1 00:29:00.957 Current Power State: Power State #0 00:29:00.957 Power 
State #0: 00:29:00.957 Max Power: 25.00 W 00:29:00.957 Non-Operational State: Operational 00:29:00.957 Entry Latency: 16 microseconds 00:29:00.957 Exit Latency: 4 microseconds 00:29:00.957 Relative Read Throughput: 0 00:29:00.957 Relative Read Latency: 0 00:29:00.957 Relative Write Throughput: 0 00:29:00.957 Relative Write Latency: 0 00:29:00.957 Idle Power: Not Reported 00:29:00.957 Active Power: Not Reported 00:29:00.957 Non-Operational Permissive Mode: Not Supported 00:29:00.957 00:29:00.957 Health Information 00:29:00.957 ================== 00:29:00.957 Critical Warnings: 00:29:00.957 Available Spare Space: OK 00:29:00.957 Temperature: OK 00:29:00.957 Device Reliability: OK 00:29:00.957 Read Only: No 00:29:00.957 Volatile Memory Backup: OK 00:29:00.957 Current Temperature: 323 Kelvin (50 Celsius) 00:29:00.957 Temperature Threshold: 343 Kelvin (70 Celsius) 00:29:00.957 Available Spare: 0% 00:29:00.957 Available Spare Threshold: 0% 00:29:00.957 Life Percentage Used: 0% 00:29:00.957 Data Units Read: 5859 00:29:00.957 Data Units Written: 2834 00:29:00.957 Host Read Commands: 296614 00:29:00.957 Host Write Commands: 163491 00:29:00.957 Controller Busy Time: 0 minutes 00:29:00.957 Power Cycles: 0 00:29:00.957 Power On Hours: 0 hours 00:29:00.957 Unsafe Shutdowns: 0 00:29:00.957 Unrecoverable Media Errors: 0 00:29:00.957 Lifetime Error Log Entries: 0 00:29:00.957 Warning Temperature Time: 0 minutes 00:29:00.957 Critical Temperature Time: 0 minutes 00:29:00.957 00:29:00.957 Number of Queues 00:29:00.957 ================ 00:29:00.957 Number of I/O Submission Queues: 64 00:29:00.957 Number of I/O Completion Queues: 64 00:29:00.957 00:29:00.957 ZNS Specific Controller Data 00:29:00.957 ============================ 00:29:00.957 Zone Append Size Limit: 0 00:29:00.957 00:29:00.957 00:29:00.957 Active Namespaces 00:29:00.957 ================= 00:29:00.957 Namespace ID:1 00:29:00.957 Error Recovery Timeout: Unlimited 00:29:00.957 Command Set Identifier: NVM (00h) 00:29:00.957 Deallocate: Supported 00:29:00.957 Deallocated/Unwritten Error: Supported 00:29:00.957 Deallocated Read Value: All 0x00 00:29:00.957 Deallocate in Write Zeroes: Not Supported 00:29:00.957 Deallocated Guard Field: 0xFFFF 00:29:00.957 Flush: Supported 00:29:00.957 Reservation: Not Supported 00:29:00.957 Namespace Sharing Capabilities: Private 00:29:00.957 Size (in LBAs): 1310720 (5GiB) 00:29:00.957 Capacity (in LBAs): 1310720 (5GiB) 00:29:00.957 Utilization (in LBAs): 1310720 (5GiB) 00:29:00.957 Thin Provisioning: Not Supported 00:29:00.957 Per-NS Atomic Units: No 00:29:00.957 Maximum Single Source Range Length: 128 00:29:00.957 Maximum Copy Length: 128 00:29:00.957 Maximum Source Range Count: 128 00:29:00.957 NGUID/EUI64 Never Reused: No 00:29:00.957 Namespace Write Protected: No 00:29:00.957 Number of LBA Formats: 8 00:29:00.957 Current LBA Format: LBA Format #04 00:29:00.957 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:00.957 LBA Format #01: Data Size: 512 Metadata Size: 8 00:29:00.957 LBA Format #02: Data Size: 512 Metadata Size: 16 00:29:00.957 LBA Format #03: Data Size: 512 Metadata Size: 64 00:29:00.957 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:29:00.957 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:29:00.957 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:29:00.958 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:29:00.958 00:29:00.958 00:29:00.958 real 0m0.792s 00:29:00.958 user 0m0.310s 00:29:00.958 sys 0m0.376s 00:29:00.958 06:21:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:29:00.958 06:21:31 -- common/autotest_common.sh@10 -- # set +x 00:29:00.958 ************************************ 00:29:00.958 END TEST nvme_identify 00:29:00.958 ************************************ 00:29:00.958 06:21:31 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:29:00.958 06:21:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:00.958 06:21:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:00.958 06:21:31 -- common/autotest_common.sh@10 -- # set +x 00:29:00.958 ************************************ 00:29:00.958 START TEST nvme_perf 00:29:00.958 ************************************ 00:29:00.958 06:21:31 -- common/autotest_common.sh@1104 -- # nvme_perf 00:29:00.958 06:21:31 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:29:02.335 Initializing NVMe Controllers 00:29:02.335 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:02.335 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:29:02.335 Initialization complete. Launching workers. 00:29:02.335 ======================================================== 00:29:02.335 Latency(us) 00:29:02.335 Device Information : IOPS MiB/s Average min max 00:29:02.335 PCIE (0000:00:06.0) NSID 1 from core 0: 52223.95 612.00 2449.66 1276.87 8389.74 00:29:02.335 ======================================================== 00:29:02.335 Total : 52223.95 612.00 2449.66 1276.87 8389.74 00:29:02.335 00:29:02.335 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:29:02.335 ================================================================================= 00:29:02.335 1.00000% : 1474.560us 00:29:02.335 10.00000% : 1677.410us 00:29:02.335 25.00000% : 1950.476us 00:29:02.335 50.00000% : 2434.194us 00:29:02.335 75.00000% : 2917.912us 00:29:02.335 90.00000% : 3183.177us 00:29:02.335 95.00000% : 3386.027us 00:29:02.335 98.00000% : 3698.103us 00:29:02.335 99.00000% : 3838.537us 00:29:02.335 99.50000% : 3947.764us 00:29:02.335 99.90000% : 6397.562us 00:29:02.335 99.99000% : 8176.396us 00:29:02.335 99.99900% : 8426.057us 00:29:02.335 99.99990% : 8426.057us 00:29:02.335 99.99999% : 8426.057us 00:29:02.335 00:29:02.335 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:29:02.335 ============================================================================== 00:29:02.335 Range in us Cumulative IO count 00:29:02.335 1271.710 - 1279.512: 0.0038% ( 2) 00:29:02.335 1279.512 - 1287.314: 0.0077% ( 2) 00:29:02.335 1295.116 - 1302.918: 0.0096% ( 1) 00:29:02.335 1302.918 - 1310.720: 0.0115% ( 1) 00:29:02.335 1310.720 - 1318.522: 0.0134% ( 1) 00:29:02.335 1318.522 - 1326.324: 0.0211% ( 4) 00:29:02.335 1326.324 - 1334.126: 0.0249% ( 2) 00:29:02.335 1334.126 - 1341.928: 0.0421% ( 9) 00:29:02.335 1341.928 - 1349.730: 0.0536% ( 6) 00:29:02.335 1349.730 - 1357.531: 0.0728% ( 10) 00:29:02.335 1357.531 - 1365.333: 0.1053% ( 17) 00:29:02.335 1365.333 - 1373.135: 0.1187% ( 7) 00:29:02.335 1373.135 - 1380.937: 0.1417% ( 12) 00:29:02.335 1380.937 - 1388.739: 0.1800% ( 20) 00:29:02.335 1388.739 - 1396.541: 0.2183% ( 20) 00:29:02.335 1396.541 - 1404.343: 0.2681% ( 26) 00:29:02.335 1404.343 - 1412.145: 0.3179% ( 26) 00:29:02.335 1412.145 - 1419.947: 0.3715% ( 28) 00:29:02.335 1419.947 - 1427.749: 0.4328% ( 32) 00:29:02.335 1427.749 - 1435.550: 0.4959% ( 33) 00:29:02.335 1435.550 - 1443.352: 0.6032% ( 56) 00:29:02.335 1443.352 - 1451.154: 0.7219% ( 62) 00:29:02.335 1451.154 - 1458.956: 0.8387% ( 61) 00:29:02.335 1458.956 - 1466.758: 0.9727% ( 70) 
00:29:02.335 1466.758 - 1474.560: 1.1757% ( 106) 00:29:02.335 1474.560 - 1482.362: 1.3883% ( 111) 00:29:02.335 1482.362 - 1490.164: 1.6391% ( 131) 00:29:02.335 1490.164 - 1497.966: 1.8746% ( 123) 00:29:02.335 1497.966 - 1505.768: 2.1523% ( 145) 00:29:02.335 1505.768 - 1513.570: 2.4472% ( 154) 00:29:02.335 1513.570 - 1521.371: 2.7650% ( 166) 00:29:02.335 1521.371 - 1529.173: 3.0905% ( 170) 00:29:02.335 1529.173 - 1536.975: 3.4275% ( 176) 00:29:02.335 1536.975 - 1544.777: 3.7339% ( 160) 00:29:02.335 1544.777 - 1552.579: 4.0633% ( 172) 00:29:02.335 1552.579 - 1560.381: 4.3984% ( 175) 00:29:02.335 1560.381 - 1568.183: 4.7392% ( 178) 00:29:02.335 1568.183 - 1575.985: 5.0686% ( 172) 00:29:02.335 1575.985 - 1583.787: 5.4764% ( 213) 00:29:02.335 1583.787 - 1591.589: 5.8804% ( 211) 00:29:02.335 1591.589 - 1599.390: 6.2443% ( 190) 00:29:02.335 1599.390 - 1607.192: 6.6311% ( 202) 00:29:02.335 1607.192 - 1614.994: 7.0102% ( 198) 00:29:02.335 1614.994 - 1622.796: 7.3817% ( 194) 00:29:02.335 1622.796 - 1630.598: 7.8144% ( 226) 00:29:02.335 1630.598 - 1638.400: 8.1936% ( 198) 00:29:02.335 1638.400 - 1646.202: 8.5842% ( 204) 00:29:02.335 1646.202 - 1654.004: 8.9863% ( 210) 00:29:02.335 1654.004 - 1661.806: 9.4018% ( 217) 00:29:02.335 1661.806 - 1669.608: 9.7924% ( 204) 00:29:02.335 1669.608 - 1677.410: 10.2214% ( 224) 00:29:02.335 1677.410 - 1685.211: 10.6464% ( 222) 00:29:02.335 1685.211 - 1693.013: 11.0773% ( 225) 00:29:02.335 1693.013 - 1700.815: 11.5062% ( 224) 00:29:02.335 1700.815 - 1708.617: 11.9351% ( 224) 00:29:02.335 1708.617 - 1716.419: 12.3851% ( 235) 00:29:02.335 1716.419 - 1724.221: 12.8102% ( 222) 00:29:02.335 1724.221 - 1732.023: 13.2334% ( 221) 00:29:02.335 1732.023 - 1739.825: 13.6700% ( 228) 00:29:02.335 1739.825 - 1747.627: 14.0874% ( 218) 00:29:02.335 1747.627 - 1755.429: 14.5125% ( 222) 00:29:02.335 1755.429 - 1763.230: 14.9452% ( 226) 00:29:02.335 1763.230 - 1771.032: 15.3684% ( 221) 00:29:02.335 1771.032 - 1778.834: 15.8414% ( 247) 00:29:02.335 1778.834 - 1786.636: 16.2646% ( 221) 00:29:02.335 1786.636 - 1794.438: 16.6762% ( 215) 00:29:02.335 1794.438 - 1802.240: 17.1243% ( 234) 00:29:02.335 1802.240 - 1810.042: 17.5379% ( 216) 00:29:02.335 1810.042 - 1817.844: 17.9898% ( 236) 00:29:02.335 1817.844 - 1825.646: 18.3996% ( 214) 00:29:02.335 1825.646 - 1833.448: 18.8419% ( 231) 00:29:02.335 1833.448 - 1841.250: 19.2536% ( 215) 00:29:02.335 1841.250 - 1849.051: 19.7036% ( 235) 00:29:02.335 1849.051 - 1856.853: 20.1229% ( 219) 00:29:02.335 1856.853 - 1864.655: 20.5384% ( 217) 00:29:02.335 1864.655 - 1872.457: 20.9865% ( 234) 00:29:02.335 1872.457 - 1880.259: 21.3714% ( 201) 00:29:02.335 1880.259 - 1888.061: 21.8118% ( 230) 00:29:02.335 1888.061 - 1895.863: 22.2158% ( 211) 00:29:02.335 1895.863 - 1903.665: 22.6275% ( 215) 00:29:02.335 1903.665 - 1911.467: 23.0526% ( 222) 00:29:02.335 1911.467 - 1919.269: 23.4681% ( 217) 00:29:02.335 1919.269 - 1927.070: 23.8990% ( 225) 00:29:02.335 1927.070 - 1934.872: 24.3317% ( 226) 00:29:02.335 1934.872 - 1942.674: 24.7358% ( 211) 00:29:02.335 1942.674 - 1950.476: 25.1838% ( 234) 00:29:02.335 1950.476 - 1958.278: 25.5725% ( 203) 00:29:02.335 1958.278 - 1966.080: 25.9861% ( 216) 00:29:02.335 1966.080 - 1973.882: 26.4131% ( 223) 00:29:02.335 1973.882 - 1981.684: 26.8306% ( 218) 00:29:02.335 1981.684 - 1989.486: 27.2576% ( 223) 00:29:02.335 1989.486 - 1997.288: 27.6693% ( 215) 00:29:02.335 1997.288 - 2012.891: 28.5175% ( 443) 00:29:02.335 2012.891 - 2028.495: 29.3256% ( 422) 00:29:02.335 2028.495 - 2044.099: 30.1337% ( 422) 00:29:02.335 2044.099 - 
2059.703: 30.9551% ( 429) 00:29:02.335 2059.703 - 2075.307: 31.7613% ( 421) 00:29:02.335 2075.307 - 2090.910: 32.5827% ( 429) 00:29:02.335 2090.910 - 2106.514: 33.4138% ( 434) 00:29:02.335 2106.514 - 2122.118: 34.2046% ( 413) 00:29:02.335 2122.118 - 2137.722: 35.0280% ( 430) 00:29:02.335 2137.722 - 2153.326: 35.8513% ( 430) 00:29:02.335 2153.326 - 2168.930: 36.6268% ( 405) 00:29:02.335 2168.930 - 2184.533: 37.4215% ( 415) 00:29:02.335 2184.533 - 2200.137: 38.2047% ( 409) 00:29:02.335 2200.137 - 2215.741: 39.0108% ( 421) 00:29:02.335 2215.741 - 2231.345: 39.8169% ( 421) 00:29:02.335 2231.345 - 2246.949: 40.6269% ( 423) 00:29:02.335 2246.949 - 2262.552: 41.4426% ( 426) 00:29:02.335 2262.552 - 2278.156: 42.2239% ( 408) 00:29:02.335 2278.156 - 2293.760: 43.0205% ( 416) 00:29:02.335 2293.760 - 2309.364: 43.8074% ( 411) 00:29:02.335 2309.364 - 2324.968: 44.6059% ( 417) 00:29:02.335 2324.968 - 2340.571: 45.4025% ( 416) 00:29:02.335 2340.571 - 2356.175: 46.1876% ( 410) 00:29:02.335 2356.175 - 2371.779: 46.9822% ( 415) 00:29:02.335 2371.779 - 2387.383: 47.7711% ( 412) 00:29:02.335 2387.383 - 2402.987: 48.5715% ( 418) 00:29:02.335 2402.987 - 2418.590: 49.3873% ( 426) 00:29:02.335 2418.590 - 2434.194: 50.1685% ( 408) 00:29:02.335 2434.194 - 2449.798: 50.9498% ( 408) 00:29:02.335 2449.798 - 2465.402: 51.7119% ( 398) 00:29:02.335 2465.402 - 2481.006: 52.5123% ( 418) 00:29:02.335 2481.006 - 2496.610: 53.3146% ( 419) 00:29:02.335 2496.610 - 2512.213: 54.1475% ( 435) 00:29:02.335 2512.213 - 2527.817: 54.9498% ( 419) 00:29:02.335 2527.817 - 2543.421: 55.7349% ( 410) 00:29:02.335 2543.421 - 2559.025: 56.5391% ( 420) 00:29:02.336 2559.025 - 2574.629: 57.3644% ( 431) 00:29:02.336 2574.629 - 2590.232: 58.1840% ( 428) 00:29:02.336 2590.232 - 2605.836: 58.9939% ( 423) 00:29:02.336 2605.836 - 2621.440: 59.8020% ( 422) 00:29:02.336 2621.440 - 2637.044: 60.6101% ( 422) 00:29:02.336 2637.044 - 2652.648: 61.4315% ( 429) 00:29:02.336 2652.648 - 2668.251: 62.2549% ( 430) 00:29:02.336 2668.251 - 2683.855: 63.0630% ( 422) 00:29:02.336 2683.855 - 2699.459: 63.8863% ( 430) 00:29:02.336 2699.459 - 2715.063: 64.7346% ( 443) 00:29:02.336 2715.063 - 2730.667: 65.5656% ( 434) 00:29:02.336 2730.667 - 2746.270: 66.3967% ( 434) 00:29:02.336 2746.270 - 2761.874: 67.2641% ( 453) 00:29:02.336 2761.874 - 2777.478: 68.0951% ( 434) 00:29:02.336 2777.478 - 2793.082: 68.9166% ( 429) 00:29:02.336 2793.082 - 2808.686: 69.7553% ( 438) 00:29:02.336 2808.686 - 2824.290: 70.6093% ( 446) 00:29:02.336 2824.290 - 2839.893: 71.4480% ( 438) 00:29:02.336 2839.893 - 2855.497: 72.3020% ( 446) 00:29:02.336 2855.497 - 2871.101: 73.1350% ( 435) 00:29:02.336 2871.101 - 2886.705: 73.9985% ( 451) 00:29:02.336 2886.705 - 2902.309: 74.8621% ( 451) 00:29:02.336 2902.309 - 2917.912: 75.7123% ( 444) 00:29:02.336 2917.912 - 2933.516: 76.5663% ( 446) 00:29:02.336 2933.516 - 2949.120: 77.4510% ( 462) 00:29:02.336 2949.120 - 2964.724: 78.3299% ( 459) 00:29:02.336 2964.724 - 2980.328: 79.1992% ( 454) 00:29:02.336 2980.328 - 2995.931: 80.0686% ( 454) 00:29:02.336 2995.931 - 3011.535: 80.9417% ( 456) 00:29:02.336 3011.535 - 3027.139: 81.8283% ( 463) 00:29:02.336 3027.139 - 3042.743: 82.6957% ( 453) 00:29:02.336 3042.743 - 3058.347: 83.5631% ( 453) 00:29:02.336 3058.347 - 3073.950: 84.4554% ( 466) 00:29:02.336 3073.950 - 3089.554: 85.3248% ( 454) 00:29:02.336 3089.554 - 3105.158: 86.1749% ( 444) 00:29:02.336 3105.158 - 3120.762: 87.0117% ( 437) 00:29:02.336 3120.762 - 3136.366: 87.8887% ( 458) 00:29:02.336 3136.366 - 3151.970: 88.6872% ( 417) 00:29:02.336 3151.970 - 
3167.573: 89.4627% ( 405) 00:29:02.336 3167.573 - 3183.177: 90.2248% ( 398) 00:29:02.336 3183.177 - 3198.781: 90.9658% ( 387) 00:29:02.336 3198.781 - 3214.385: 91.6111% ( 337) 00:29:02.336 3214.385 - 3229.989: 92.1645% ( 289) 00:29:02.336 3229.989 - 3245.592: 92.6662% ( 262) 00:29:02.336 3245.592 - 3261.196: 93.0990% ( 226) 00:29:02.336 3261.196 - 3276.800: 93.4532% ( 185) 00:29:02.336 3276.800 - 3292.404: 93.7596% ( 160) 00:29:02.336 3292.404 - 3308.008: 94.0353% ( 144) 00:29:02.336 3308.008 - 3323.611: 94.2708% ( 123) 00:29:02.336 3323.611 - 3339.215: 94.4738% ( 106) 00:29:02.336 3339.215 - 3354.819: 94.6710% ( 103) 00:29:02.336 3354.819 - 3370.423: 94.8491% ( 93) 00:29:02.336 3370.423 - 3386.027: 95.0329% ( 96) 00:29:02.336 3386.027 - 3401.630: 95.2034% ( 89) 00:29:02.336 3401.630 - 3417.234: 95.3585% ( 81) 00:29:02.336 3417.234 - 3432.838: 95.5097% ( 79) 00:29:02.336 3432.838 - 3448.442: 95.6667% ( 82) 00:29:02.336 3448.442 - 3464.046: 95.8161% ( 78) 00:29:02.336 3464.046 - 3479.650: 95.9712% ( 81) 00:29:02.336 3479.650 - 3495.253: 96.1301% ( 83) 00:29:02.336 3495.253 - 3510.857: 96.2833% ( 80) 00:29:02.336 3510.857 - 3526.461: 96.4269% ( 75) 00:29:02.336 3526.461 - 3542.065: 96.5820% ( 81) 00:29:02.336 3542.065 - 3557.669: 96.7256% ( 75) 00:29:02.336 3557.669 - 3573.272: 96.8693% ( 75) 00:29:02.336 3573.272 - 3588.876: 97.0167% ( 77) 00:29:02.336 3588.876 - 3604.480: 97.1641% ( 77) 00:29:02.336 3604.480 - 3620.084: 97.3135% ( 78) 00:29:02.336 3620.084 - 3635.688: 97.4609% ( 77) 00:29:02.336 3635.688 - 3651.291: 97.6122% ( 79) 00:29:02.336 3651.291 - 3666.895: 97.7577% ( 76) 00:29:02.336 3666.895 - 3682.499: 97.8918% ( 70) 00:29:02.336 3682.499 - 3698.103: 98.0335% ( 74) 00:29:02.336 3698.103 - 3713.707: 98.1733% ( 73) 00:29:02.336 3713.707 - 3729.310: 98.3073% ( 70) 00:29:02.336 3729.310 - 3744.914: 98.4356% ( 67) 00:29:02.336 3744.914 - 3760.518: 98.5562% ( 63) 00:29:02.336 3760.518 - 3776.122: 98.6692% ( 59) 00:29:02.336 3776.122 - 3791.726: 98.7783% ( 57) 00:29:02.336 3791.726 - 3807.330: 98.8741% ( 50) 00:29:02.336 3807.330 - 3822.933: 98.9602% ( 45) 00:29:02.336 3822.933 - 3838.537: 99.0445% ( 44) 00:29:02.336 3838.537 - 3854.141: 99.1211% ( 40) 00:29:02.336 3854.141 - 3869.745: 99.1958% ( 39) 00:29:02.336 3869.745 - 3885.349: 99.2743% ( 41) 00:29:02.336 3885.349 - 3900.952: 99.3470% ( 38) 00:29:02.336 3900.952 - 3916.556: 99.4121% ( 34) 00:29:02.336 3916.556 - 3932.160: 99.4677% ( 29) 00:29:02.336 3932.160 - 3947.764: 99.5155% ( 25) 00:29:02.336 3947.764 - 3963.368: 99.5558% ( 21) 00:29:02.336 3963.368 - 3978.971: 99.5826% ( 14) 00:29:02.336 3978.971 - 3994.575: 99.6055% ( 12) 00:29:02.336 3994.575 - 4025.783: 99.6362% ( 16) 00:29:02.336 4025.783 - 4056.990: 99.6515% ( 8) 00:29:02.336 4056.990 - 4088.198: 99.6611% ( 5) 00:29:02.336 4088.198 - 4119.406: 99.6706% ( 5) 00:29:02.336 4119.406 - 4150.613: 99.6783% ( 4) 00:29:02.336 4150.613 - 4181.821: 99.6841% ( 3) 00:29:02.336 4181.821 - 4213.029: 99.6879% ( 2) 00:29:02.336 4213.029 - 4244.236: 99.6936% ( 3) 00:29:02.336 4244.236 - 4275.444: 99.6975% ( 2) 00:29:02.336 4275.444 - 4306.651: 99.7032% ( 3) 00:29:02.336 4306.651 - 4337.859: 99.7070% ( 2) 00:29:02.336 4337.859 - 4369.067: 99.7128% ( 3) 00:29:02.336 4369.067 - 4400.274: 99.7185% ( 3) 00:29:02.336 4400.274 - 4431.482: 99.7204% ( 1) 00:29:02.336 4431.482 - 4462.690: 99.7262% ( 3) 00:29:02.336 4462.690 - 4493.897: 99.7319% ( 3) 00:29:02.336 4493.897 - 4525.105: 99.7377% ( 3) 00:29:02.336 4525.105 - 4556.312: 99.7396% ( 1) 00:29:02.336 4556.312 - 4587.520: 99.7453% ( 3) 
00:29:02.336 4587.520 - 4618.728: 99.7511% ( 3) 00:29:02.336 4618.728 - 4649.935: 99.7568% ( 3) 00:29:02.336 4649.935 - 4681.143: 99.7606% ( 2) 00:29:02.336 4681.143 - 4712.350: 99.7645% ( 2) 00:29:02.336 4712.350 - 4743.558: 99.7702% ( 3) 00:29:02.336 4743.558 - 4774.766: 99.7760% ( 3) 00:29:02.336 4774.766 - 4805.973: 99.7817% ( 3) 00:29:02.336 4805.973 - 4837.181: 99.7855% ( 2) 00:29:02.336 4837.181 - 4868.389: 99.7913% ( 3) 00:29:02.336 4868.389 - 4899.596: 99.7970% ( 3) 00:29:02.336 4899.596 - 4930.804: 99.8009% ( 2) 00:29:02.336 4930.804 - 4962.011: 99.8047% ( 2) 00:29:02.336 4962.011 - 4993.219: 99.8104% ( 3) 00:29:02.336 4993.219 - 5024.427: 99.8162% ( 3) 00:29:02.336 5024.427 - 5055.634: 99.8219% ( 3) 00:29:02.336 5055.634 - 5086.842: 99.8277% ( 3) 00:29:02.336 5086.842 - 5118.050: 99.8296% ( 1) 00:29:02.336 5118.050 - 5149.257: 99.8353% ( 3) 00:29:02.336 5149.257 - 5180.465: 99.8411% ( 3) 00:29:02.336 5180.465 - 5211.672: 99.8430% ( 1) 00:29:02.336 5211.672 - 5242.880: 99.8449% ( 1) 00:29:02.336 5242.880 - 5274.088: 99.8468% ( 1) 00:29:02.336 5305.295 - 5336.503: 99.8487% ( 1) 00:29:02.336 5336.503 - 5367.710: 99.8506% ( 1) 00:29:02.336 5398.918 - 5430.126: 99.8526% ( 1) 00:29:02.336 5430.126 - 5461.333: 99.8545% ( 1) 00:29:02.336 5461.333 - 5492.541: 99.8564% ( 1) 00:29:02.336 5492.541 - 5523.749: 99.8583% ( 1) 00:29:02.336 5523.749 - 5554.956: 99.8602% ( 1) 00:29:02.336 5554.956 - 5586.164: 99.8621% ( 1) 00:29:02.336 5617.371 - 5648.579: 99.8640% ( 1) 00:29:02.336 5648.579 - 5679.787: 99.8660% ( 1) 00:29:02.336 5679.787 - 5710.994: 99.8679% ( 1) 00:29:02.336 5710.994 - 5742.202: 99.8698% ( 1) 00:29:02.336 5742.202 - 5773.410: 99.8717% ( 1) 00:29:02.336 5804.617 - 5835.825: 99.8736% ( 1) 00:29:02.336 5835.825 - 5867.032: 99.8755% ( 1) 00:29:02.336 5867.032 - 5898.240: 99.8775% ( 1) 00:29:02.336 5898.240 - 5929.448: 99.8794% ( 1) 00:29:02.336 5960.655 - 5991.863: 99.8813% ( 1) 00:29:02.336 5991.863 - 6023.070: 99.8832% ( 1) 00:29:02.336 6054.278 - 6085.486: 99.8851% ( 1) 00:29:02.336 6085.486 - 6116.693: 99.8870% ( 1) 00:29:02.336 6116.693 - 6147.901: 99.8889% ( 1) 00:29:02.336 6147.901 - 6179.109: 99.8909% ( 1) 00:29:02.336 6210.316 - 6241.524: 99.8928% ( 1) 00:29:02.336 6241.524 - 6272.731: 99.8947% ( 1) 00:29:02.336 6272.731 - 6303.939: 99.8966% ( 1) 00:29:02.336 6303.939 - 6335.147: 99.8985% ( 1) 00:29:02.336 6366.354 - 6397.562: 99.9004% ( 1) 00:29:02.336 6397.562 - 6428.770: 99.9023% ( 1) 00:29:02.336 6428.770 - 6459.977: 99.9043% ( 1) 00:29:02.336 6491.185 - 6522.392: 99.9062% ( 1) 00:29:02.336 6522.392 - 6553.600: 99.9081% ( 1) 00:29:02.336 6553.600 - 6584.808: 99.9100% ( 1) 00:29:02.336 6584.808 - 6616.015: 99.9119% ( 1) 00:29:02.336 6616.015 - 6647.223: 99.9138% ( 1) 00:29:02.336 6647.223 - 6678.430: 99.9157% ( 1) 00:29:02.336 6709.638 - 6740.846: 99.9177% ( 1) 00:29:02.336 6740.846 - 6772.053: 99.9196% ( 1) 00:29:02.336 6803.261 - 6834.469: 99.9215% ( 1) 00:29:02.336 6834.469 - 6865.676: 99.9234% ( 1) 00:29:02.336 6865.676 - 6896.884: 99.9253% ( 1) 00:29:02.336 6896.884 - 6928.091: 99.9272% ( 1) 00:29:02.336 6959.299 - 6990.507: 99.9292% ( 1) 00:29:02.336 6990.507 - 7021.714: 99.9311% ( 1) 00:29:02.336 7021.714 - 7052.922: 99.9330% ( 1) 00:29:02.336 7052.922 - 7084.130: 99.9349% ( 1) 00:29:02.336 7084.130 - 7115.337: 99.9368% ( 1) 00:29:02.336 7115.337 - 7146.545: 99.9387% ( 1) 00:29:02.336 7146.545 - 7177.752: 99.9406% ( 1) 00:29:02.336 7177.752 - 7208.960: 99.9426% ( 1) 00:29:02.336 7240.168 - 7271.375: 99.9445% ( 1) 00:29:02.336 7271.375 - 7302.583: 99.9464% ( 1) 
00:29:02.336 7302.583 - 7333.790: 99.9483% ( 1) 00:29:02.336 7333.790 - 7364.998: 99.9502% ( 1) 00:29:02.336 7364.998 - 7396.206: 99.9521% ( 1) 00:29:02.337 7396.206 - 7427.413: 99.9540% ( 1) 00:29:02.337 7458.621 - 7489.829: 99.9560% ( 1) 00:29:02.337 7489.829 - 7521.036: 99.9579% ( 1) 00:29:02.337 7521.036 - 7552.244: 99.9598% ( 1) 00:29:02.337 7552.244 - 7583.451: 99.9617% ( 1) 00:29:02.337 7614.659 - 7645.867: 99.9636% ( 1) 00:29:02.337 7645.867 - 7677.074: 99.9655% ( 1) 00:29:02.337 7708.282 - 7739.490: 99.9674% ( 1) 00:29:02.337 7739.490 - 7770.697: 99.9694% ( 1) 00:29:02.337 7770.697 - 7801.905: 99.9713% ( 1) 00:29:02.337 7833.112 - 7864.320: 99.9732% ( 1) 00:29:02.337 7864.320 - 7895.528: 99.9751% ( 1) 00:29:02.337 7895.528 - 7926.735: 99.9770% ( 1) 00:29:02.337 7926.735 - 7957.943: 99.9789% ( 1) 00:29:02.337 7989.150 - 8051.566: 99.9828% ( 2) 00:29:02.337 8051.566 - 8113.981: 99.9866% ( 2) 00:29:02.337 8113.981 - 8176.396: 99.9904% ( 2) 00:29:02.337 8176.396 - 8238.811: 99.9923% ( 1) 00:29:02.337 8238.811 - 8301.227: 99.9962% ( 2) 00:29:02.337 8301.227 - 8363.642: 99.9981% ( 1) 00:29:02.337 8363.642 - 8426.057: 100.0000% ( 1) 00:29:02.337 00:29:02.337 06:21:32 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:29:03.711 Initializing NVMe Controllers 00:29:03.711 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:03.711 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:29:03.711 Initialization complete. Launching workers. 00:29:03.711 ======================================================== 00:29:03.711 Latency(us) 00:29:03.711 Device Information : IOPS MiB/s Average min max 00:29:03.711 PCIE (0000:00:06.0) NSID 1 from core 0: 59752.94 700.23 2142.47 872.33 10358.87 00:29:03.711 ======================================================== 00:29:03.711 Total : 59752.94 700.23 2142.47 872.33 10358.87 00:29:03.711 00:29:03.711 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:29:03.711 ================================================================================= 00:29:03.711 1.00000% : 1310.720us 00:29:03.711 10.00000% : 1646.202us 00:29:03.711 25.00000% : 1771.032us 00:29:03.711 50.00000% : 1950.476us 00:29:03.711 75.00000% : 2356.175us 00:29:03.711 90.00000% : 2949.120us 00:29:03.711 95.00000% : 3417.234us 00:29:03.711 98.00000% : 3807.330us 00:29:03.711 99.00000% : 3994.575us 00:29:03.711 99.50000% : 4306.651us 00:29:03.711 99.90000% : 6647.223us 00:29:03.711 99.99000% : 10111.269us 00:29:03.711 99.99900% : 10360.930us 00:29:03.711 99.99990% : 10360.930us 00:29:03.711 99.99999% : 10360.930us 00:29:03.711 00:29:03.711 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:29:03.711 ============================================================================== 00:29:03.711 Range in us Cumulative IO count 00:29:03.711 869.912 - 873.813: 0.0017% ( 1) 00:29:03.711 877.714 - 881.615: 0.0117% ( 6) 00:29:03.711 881.615 - 885.516: 0.0134% ( 1) 00:29:03.711 885.516 - 889.417: 0.0167% ( 2) 00:29:03.711 889.417 - 893.318: 0.0201% ( 2) 00:29:03.711 908.922 - 912.823: 0.0218% ( 1) 00:29:03.711 924.526 - 928.427: 0.0234% ( 1) 00:29:03.711 936.229 - 940.130: 0.0251% ( 1) 00:29:03.711 947.931 - 951.832: 0.0335% ( 5) 00:29:03.711 951.832 - 955.733: 0.0502% ( 10) 00:29:03.711 955.733 - 959.634: 0.0536% ( 2) 00:29:03.711 963.535 - 967.436: 0.0552% ( 1) 00:29:03.711 967.436 - 971.337: 0.0586% ( 2) 00:29:03.711 979.139 - 983.040: 0.0602% ( 1) 00:29:03.711 986.941 - 990.842: 0.0619% ( 1) 
00:29:03.711 990.842 - 994.743: 0.0653% ( 2) 00:29:03.711 1006.446 - 1014.248: 0.0703% ( 3) 00:29:03.711 1014.248 - 1022.050: 0.1138% ( 26) 00:29:03.711 1022.050 - 1029.851: 0.1222% ( 5) 00:29:03.711 1029.851 - 1037.653: 0.1272% ( 3) 00:29:03.711 1037.653 - 1045.455: 0.1322% ( 3) 00:29:03.711 1045.455 - 1053.257: 0.1423% ( 6) 00:29:03.711 1053.257 - 1061.059: 0.1640% ( 13) 00:29:03.711 1061.059 - 1068.861: 0.2075% ( 26) 00:29:03.711 1068.861 - 1076.663: 0.2393% ( 19) 00:29:03.711 1076.663 - 1084.465: 0.2477% ( 5) 00:29:03.711 1084.465 - 1092.267: 0.2627% ( 9) 00:29:03.711 1092.267 - 1100.069: 0.2929% ( 18) 00:29:03.711 1100.069 - 1107.870: 0.3113% ( 11) 00:29:03.711 1107.870 - 1115.672: 0.3263% ( 9) 00:29:03.711 1115.672 - 1123.474: 0.3330% ( 4) 00:29:03.711 1123.474 - 1131.276: 0.3448% ( 7) 00:29:03.711 1131.276 - 1139.078: 0.3648% ( 12) 00:29:03.711 1139.078 - 1146.880: 0.3782% ( 8) 00:29:03.711 1146.880 - 1154.682: 0.3899% ( 7) 00:29:03.711 1154.682 - 1162.484: 0.4033% ( 8) 00:29:03.711 1162.484 - 1170.286: 0.4150% ( 7) 00:29:03.711 1170.286 - 1178.088: 0.4485% ( 20) 00:29:03.711 1178.088 - 1185.890: 0.4770% ( 17) 00:29:03.711 1185.890 - 1193.691: 0.4853% ( 5) 00:29:03.711 1193.691 - 1201.493: 0.5021% ( 10) 00:29:03.711 1201.493 - 1209.295: 0.5221% ( 12) 00:29:03.711 1209.295 - 1217.097: 0.5355% ( 8) 00:29:03.711 1217.097 - 1224.899: 0.5556% ( 12) 00:29:03.711 1224.899 - 1232.701: 0.6075% ( 31) 00:29:03.711 1232.701 - 1240.503: 0.6443% ( 22) 00:29:03.711 1240.503 - 1248.305: 0.6995% ( 33) 00:29:03.711 1248.305 - 1256.107: 0.8334% ( 80) 00:29:03.711 1256.107 - 1263.909: 0.8619% ( 17) 00:29:03.711 1263.909 - 1271.710: 0.8803% ( 11) 00:29:03.711 1271.710 - 1279.512: 0.8987% ( 11) 00:29:03.711 1279.512 - 1287.314: 0.9171% ( 11) 00:29:03.711 1287.314 - 1295.116: 0.9673% ( 30) 00:29:03.711 1295.116 - 1302.918: 0.9857% ( 11) 00:29:03.711 1302.918 - 1310.720: 1.0075% ( 13) 00:29:03.711 1310.720 - 1318.522: 1.0393% ( 19) 00:29:03.711 1318.522 - 1326.324: 1.0594% ( 12) 00:29:03.711 1326.324 - 1334.126: 1.1012% ( 25) 00:29:03.711 1334.126 - 1341.928: 1.1163% ( 9) 00:29:03.711 1341.928 - 1349.730: 1.1380% ( 13) 00:29:03.711 1349.730 - 1357.531: 1.1899% ( 31) 00:29:03.711 1357.531 - 1365.333: 1.2217% ( 19) 00:29:03.711 1365.333 - 1373.135: 1.2418% ( 12) 00:29:03.711 1373.135 - 1380.937: 1.2886% ( 28) 00:29:03.711 1380.937 - 1388.739: 1.3204% ( 19) 00:29:03.711 1388.739 - 1396.541: 1.3405% ( 12) 00:29:03.711 1396.541 - 1404.343: 1.3556% ( 9) 00:29:03.711 1404.343 - 1412.145: 1.3824% ( 16) 00:29:03.711 1412.145 - 1419.947: 1.4259% ( 26) 00:29:03.711 1419.947 - 1427.749: 1.4711% ( 27) 00:29:03.711 1427.749 - 1435.550: 1.5296% ( 35) 00:29:03.711 1435.550 - 1443.352: 1.6016% ( 43) 00:29:03.711 1443.352 - 1451.154: 1.6953% ( 56) 00:29:03.711 1451.154 - 1458.956: 1.8275% ( 79) 00:29:03.711 1458.956 - 1466.758: 1.9380% ( 66) 00:29:03.711 1466.758 - 1474.560: 2.0802% ( 85) 00:29:03.711 1474.560 - 1482.362: 2.2459% ( 99) 00:29:03.711 1482.362 - 1490.164: 2.4200% ( 104) 00:29:03.711 1490.164 - 1497.966: 2.6074% ( 112) 00:29:03.711 1497.966 - 1505.768: 2.8400% ( 139) 00:29:03.711 1505.768 - 1513.570: 3.0743% ( 140) 00:29:03.711 1513.570 - 1521.371: 3.3488% ( 164) 00:29:03.711 1521.371 - 1529.173: 3.6350% ( 171) 00:29:03.711 1529.173 - 1536.975: 3.9396% ( 182) 00:29:03.711 1536.975 - 1544.777: 4.2910% ( 210) 00:29:03.711 1544.777 - 1552.579: 4.6274% ( 201) 00:29:03.711 1552.579 - 1560.381: 4.9905% ( 217) 00:29:03.711 1560.381 - 1568.183: 5.4106% ( 251) 00:29:03.711 1568.183 - 1575.985: 5.7754% ( 218) 
00:29:03.711 [nvme_perf cumulative latency histogram, Range in us / Cumulative Count: buckets from 1575.985 - 1583.787: 6.2273% ( 270) through 10298.514 - 10360.930: 100.0000% ( 2)]
00:29:03.713 00:29:03.713 06:21:34 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:29:03.713 00:29:03.713 real 0m2.793s 00:29:03.713 user 0m2.267s 00:29:03.713 sys 0m0.374s 00:29:03.713 06:21:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:03.713 06:21:34 -- common/autotest_common.sh@10 -- # set +x 00:29:03.713 ************************************ 00:29:03.713 END TEST nvme_perf 00:29:03.713 ************************************
00:29:03.713 06:21:34 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:29:03.713 06:21:34 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:29:03.713 06:21:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:03.713 06:21:34 -- common/autotest_common.sh@10 -- # set +x 00:29:03.713 ************************************ 00:29:03.713 START TEST nvme_hello_world 00:29:03.713 ************************************ 00:29:03.713 06:21:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:29:04.279 Initializing NVMe Controllers 00:29:04.279 Attached to 0000:00:06.0 00:29:04.279 Namespace ID: 1 size: 5GB 00:29:04.279 Initialization complete. 00:29:04.279 INFO: using host memory buffer for IO 00:29:04.279 Hello world! 00:29:04.279 00:29:04.279 real 0m0.384s 00:29:04.279 user 0m0.132s 00:29:04.279 sys 0m0.162s 00:29:04.279 06:21:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:04.279 06:21:34 -- common/autotest_common.sh@10 -- # set +x 00:29:04.279 ************************************ 00:29:04.279 END TEST nvme_hello_world 00:29:04.279 ************************************
00:29:04.279 06:21:34 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:29:04.279 06:21:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:04.279 06:21:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:04.279 06:21:34 -- common/autotest_common.sh@10 -- # set +x 00:29:04.279 ************************************ 00:29:04.279 START TEST nvme_sgl 00:29:04.279 ************************************ 00:29:04.279 06:21:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:29:04.538 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:29:04.538 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:29:04.538 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:29:04.538 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:29:04.538 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:29:04.538 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:29:04.538 NVMe Readv/Writev Request test 00:29:04.538 Attached to 0000:00:06.0 00:29:04.538 0000:00:06.0: build_io_request_2 test passed 00:29:04.538 0000:00:06.0: build_io_request_4 test passed 00:29:04.538 0000:00:06.0: build_io_request_5 test passed 00:29:04.538 0000:00:06.0: build_io_request_6 test passed 00:29:04.538 0000:00:06.0: build_io_request_7 test passed 00:29:04.538 0000:00:06.0: build_io_request_10 test passed 00:29:04.538 Cleaning up...
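A note for readers following the banners: every START TEST/END TEST pair in this log is emitted by the run_test helper from autotest_common.sh, which also produces the real/user/sys lines via the shell's time builtin. A minimal sketch of the observable behavior, reconstructed from this output alone (the real helper does more, e.g. the xtrace_disable toggling traced above):

    # Hypothetical reconstruction; mirrors only the banners and timing visible in this log.
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"          # e.g. run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }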
00:29:04.538 00:29:04.538 real 0m0.446s 00:29:04.538 user 0m0.227s 00:29:04.538 sys 0m0.147s 00:29:04.538 06:21:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:04.538 ************************************ 00:29:04.538 END TEST nvme_sgl 00:29:04.538 ************************************ 00:29:04.538 06:21:35 -- common/autotest_common.sh@10 -- # set +x 00:29:04.797 06:21:35 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:29:04.797 06:21:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:04.797 06:21:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:04.797 06:21:35 -- common/autotest_common.sh@10 -- # set +x 00:29:04.797 ************************************ 00:29:04.797 START TEST nvme_e2edp 00:29:04.797 ************************************ 00:29:04.797 06:21:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:29:05.055 NVMe Write/Read with End-to-End data protection test 00:29:05.055 Attached to 0000:00:06.0 00:29:05.055 Cleaning up... 00:29:05.055 00:29:05.055 real 0m0.333s 00:29:05.055 user 0m0.125s 00:29:05.055 sys 0m0.157s 00:29:05.055 06:21:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:05.055 06:21:35 -- common/autotest_common.sh@10 -- # set +x 00:29:05.055 ************************************ 00:29:05.055 END TEST nvme_e2edp 00:29:05.055 ************************************ 00:29:05.055 06:21:35 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:29:05.055 06:21:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:05.055 06:21:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:05.055 06:21:35 -- common/autotest_common.sh@10 -- # set +x 00:29:05.055 ************************************ 00:29:05.055 START TEST nvme_reserve 00:29:05.055 ************************************ 00:29:05.055 06:21:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:29:05.313 ===================================================== 00:29:05.313 NVMe Controller at PCI bus 0, device 6, function 0 00:29:05.313 ===================================================== 00:29:05.313 Reservations: Not Supported 00:29:05.313 Reservation test passed 00:29:05.313 00:29:05.313 real 0m0.345s 00:29:05.313 user 0m0.125s 00:29:05.313 sys 0m0.136s 00:29:05.313 06:21:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:05.313 06:21:35 -- common/autotest_common.sh@10 -- # set +x 00:29:05.313 ************************************ 00:29:05.313 END TEST nvme_reserve 00:29:05.313 ************************************ 00:29:05.571 06:21:35 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:29:05.571 06:21:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:05.571 06:21:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:05.571 06:21:35 -- common/autotest_common.sh@10 -- # set +x 00:29:05.571 ************************************ 00:29:05.571 START TEST nvme_err_injection 00:29:05.571 ************************************ 00:29:05.571 06:21:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:29:05.828 NVMe Error Injection test 00:29:05.828 Attached to 0000:00:06.0 00:29:05.828 0000:00:06.0: get features failed as expected 00:29:05.828 0000:00:06.0: get features successfully as expected 00:29:05.828 0000:00:06.0: 
read failed as expected 00:29:05.828 0000:00:06.0: read successfully as expected 00:29:05.828 Cleaning up... 00:29:05.828 00:29:05.828 real 0m0.375s 00:29:05.828 user 0m0.137s 00:29:05.828 sys 0m0.161s 00:29:05.828 06:21:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:05.828 06:21:36 -- common/autotest_common.sh@10 -- # set +x 00:29:05.828 ************************************ 00:29:05.828 END TEST nvme_err_injection 00:29:05.828 ************************************
00:29:05.828 06:21:36 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:29:05.828 06:21:36 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:29:05.828 06:21:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:05.828 06:21:36 -- common/autotest_common.sh@10 -- # set +x 00:29:05.828 ************************************ 00:29:05.828 START TEST nvme_overhead 00:29:05.828 ************************************ 00:29:05.828 06:21:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:29:07.206 Initializing NVMe Controllers 00:29:07.206 Attached to 0000:00:06.0 00:29:07.206 Initialization complete. Launching workers. 00:29:07.206 submit (in ns) avg, min, max = 13415.6, 11287.6, 53521.9 00:29:07.206 complete (in ns) avg, min, max = 8103.1, 7741.9, 77928.6 00:29:07.206 00:29:07.206 Submit histogram 00:29:07.206 ================ 00:29:07.206 Range in us Cumulative Count 00:29:07.206 [buckets from 11.276 - 11.337: 0.0124% ( 1) through 53.394 - 53.638: 100.0000% ( 1)]
00:29:07.207 00:29:07.207 Complete histogram 00:29:07.207 ================== 00:29:07.207 Range in us Cumulative Count 00:29:07.207 [buckets from 7.741 - 7.771: 0.3235% ( 26) through 77.531 - 78.019: 100.0000% ( 1)]
00:29:07.207 00:29:07.207 real 0m1.356s 00:29:07.207 user 0m1.141s 00:29:07.207 sys 0m0.150s 00:29:07.207 06:21:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:07.207 06:21:37 -- common/autotest_common.sh@10 -- # set +x 00:29:07.207 ************************************ 00:29:07.207 END TEST nvme_overhead 00:29:07.208 ************************************
00:29:07.208 06:21:37 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:29:07.208 06:21:37 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:29:07.208 06:21:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:07.208 06:21:37 -- common/autotest_common.sh@10 -- # set +x 00:29:07.208 ************************************ 00:29:07.208 START TEST nvme_arbitration 00:29:07.208 ************************************ 00:29:07.208 06:21:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:29:11.388 Initializing NVMe Controllers 00:29:11.388 Attached to 0000:00:06.0 00:29:11.388 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:29:11.388 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:29:11.388 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:29:11.388 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:29:11.388 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:29:11.388 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:29:11.388 Initialization complete. Launching workers.
00:29:11.388 Starting thread on core 1 with urgent priority queue 00:29:11.388 Starting thread on core 2 with urgent priority queue 00:29:11.388 Starting thread on core 0 with urgent priority queue 00:29:11.388 Starting thread on core 3 with urgent priority queue 00:29:11.388 QEMU NVMe Ctrl (12340 ) core 0: 960.00 IO/s 104.17 secs/100000 ios 00:29:11.388 QEMU NVMe Ctrl (12340 ) core 1: 960.00 IO/s 104.17 secs/100000 ios 00:29:11.388 QEMU NVMe Ctrl (12340 ) core 2: 618.67 IO/s 161.64 secs/100000 ios 00:29:11.388 QEMU NVMe Ctrl (12340 ) core 3: 746.67 IO/s 133.93 secs/100000 ios 00:29:11.388 ======================================================== 00:29:11.388 00:29:11.388 00:29:11.388 real 0m3.583s 00:29:11.388 user 0m9.617s 00:29:11.388 sys 0m0.201s 00:29:11.388 06:21:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:11.388 ************************************ 00:29:11.388 END TEST nvme_arbitration 00:29:11.388 06:21:41 -- common/autotest_common.sh@10 -- # set +x 00:29:11.388 ************************************ 00:29:11.388 06:21:41 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:29:11.388 06:21:41 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:29:11.388 06:21:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:11.388 06:21:41 -- common/autotest_common.sh@10 -- # set +x 00:29:11.388 ************************************ 00:29:11.388 START TEST nvme_single_aen 00:29:11.388 ************************************ 00:29:11.388 06:21:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:29:11.388 [2024-06-11 06:21:41.546307] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:29:11.388 [2024-06-11 06:21:41.546434] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:11.388 [2024-06-11 06:21:41.776089] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:29:11.388 Asynchronous Event Request test 00:29:11.388 Attached to 0000:00:06.0 00:29:11.388 Reset controller to setup AER completions for this process 00:29:11.388 Registering asynchronous event callbacks... 00:29:11.388 Getting orig temperature thresholds of all controllers 00:29:11.388 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:29:11.388 Setting all controllers temperature threshold low to trigger AER 00:29:11.388 Waiting for all controllers temperature threshold to be set lower 00:29:11.388 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:29:11.389 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:29:11.389 Waiting for all controllers to trigger AER and reset threshold 00:29:11.389 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:29:11.389 Cleaning up... 
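One sanity check worth doing on the arbitration summary above: the secs/100000 ios column is simply the projected time to finish the -n 100000 I/Os at each core's measured rate, so 100000 / 960.00 IO/s = 104.17 s for cores 0 and 1, 100000 / 618.67 IO/s = 161.64 s for core 2, and 100000 / 746.67 IO/s = 133.93 s for core 3. The same check as a throwaway loop (bc assumed present on the build VM):

    for iops in 960.00 618.67 746.67; do
        echo "scale=2; 100000 / $iops" | bc   # 104.16, 161.63, 133.92 -- the table's values, up to rounding
    done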
00:29:11.389 00:29:11.389 real 0m0.346s 00:29:11.389 user 0m0.121s 00:29:11.389 sys 0m0.152s 00:29:11.389 06:21:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:11.389 ************************************ 00:29:11.389 END TEST nvme_single_aen 00:29:11.389 ************************************ 00:29:11.389 06:21:41 -- common/autotest_common.sh@10 -- # set +x 00:29:11.389 06:21:41 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:29:11.389 06:21:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:11.389 06:21:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:11.389 06:21:41 -- common/autotest_common.sh@10 -- # set +x 00:29:11.389 ************************************ 00:29:11.389 START TEST nvme_doorbell_aers 00:29:11.389 ************************************ 00:29:11.389 06:21:41 -- common/autotest_common.sh@1104 -- # nvme_doorbell_aers 00:29:11.389 06:21:41 -- nvme/nvme.sh@70 -- # bdfs=() 00:29:11.389 06:21:41 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:29:11.389 06:21:41 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:29:11.389 06:21:41 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:29:11.389 06:21:41 -- common/autotest_common.sh@1498 -- # bdfs=() 00:29:11.389 06:21:41 -- common/autotest_common.sh@1498 -- # local bdfs 00:29:11.389 06:21:41 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:11.389 06:21:41 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:11.389 06:21:41 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:11.389 06:21:41 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:29:11.389 06:21:41 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:29:11.389 06:21:41 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:29:11.389 06:21:41 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:29:12.077 [2024-06-11 06:21:42.345218] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 135262) is not found. Dropping the request. 00:29:22.052 Executing: test_write_invalid_db 00:29:22.052 Waiting for AER completion... 00:29:22.052 Failure: test_write_invalid_db 00:29:22.052 00:29:22.052 Executing: test_invalid_db_write_overflow_sq 00:29:22.052 Waiting for AER completion... 00:29:22.052 Failure: test_invalid_db_write_overflow_sq 00:29:22.052 00:29:22.052 Executing: test_invalid_db_write_overflow_cq 00:29:22.052 Waiting for AER completion... 
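The get_nvme_bdfs helper traced just above builds its device list by asking gen_nvme.sh for a bdev JSON config and extracting each controller's PCI address with jq. Pulled out of the test harness, the pattern is roughly:

    rootdir=/home/vagrant/spdk_repo/spdk
    # One traddr per attached controller; on this VM the list is just 0000:00:06.0.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "No NVMe devices found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"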
00:29:22.052 Failure: test_invalid_db_write_overflow_cq 00:29:22.052 00:29:22.052 00:29:22.052 real 0m10.130s 00:29:22.052 user 0m7.227s 00:29:22.052 sys 0m2.809s 00:29:22.052 06:21:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:22.052 06:21:52 -- common/autotest_common.sh@10 -- # set +x 00:29:22.052 ************************************ 00:29:22.052 END TEST nvme_doorbell_aers 00:29:22.052 ************************************ 00:29:22.052 06:21:52 -- nvme/nvme.sh@97 -- # uname 00:29:22.052 06:21:52 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:29:22.052 06:21:52 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:29:22.052 06:21:52 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:29:22.052 06:21:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:22.052 06:21:52 -- common/autotest_common.sh@10 -- # set +x 00:29:22.052 ************************************ 00:29:22.052 START TEST nvme_multi_aen 00:29:22.052 ************************************ 00:29:22.052 06:21:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:29:22.052 [2024-06-11 06:21:52.151042] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:29:22.052 [2024-06-11 06:21:52.151217] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:22.052 [2024-06-11 06:21:52.367569] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:29:22.052 [2024-06-11 06:21:52.367637] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 135262) is not found. Dropping the request. 00:29:22.052 [2024-06-11 06:21:52.367727] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 135262) is not found. Dropping the request. 00:29:22.052 [2024-06-11 06:21:52.367757] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 135262) is not found. Dropping the request. 00:29:22.052 Child process pid: 135452 00:29:22.052 [2024-06-11 06:21:52.371341] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 00:29:22.052 [2024-06-11 06:21:52.371444] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.951 [Child] Asynchronous Event Request test 00:29:23.951 [Child] Attached to 0000:00:06.0 00:29:23.951 [Child] Registering asynchronous event callbacks... 00:29:23.951 [Child] Getting orig temperature thresholds of all controllers 00:29:23.951 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:29:23.951 [Child] Waiting for all controllers to trigger AER and reset threshold 00:29:23.951 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:29:23.951 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:29:23.951 [Child] Cleaning up... 00:29:23.951 Asynchronous Event Request test 00:29:23.952 Attached to 0000:00:06.0 00:29:23.952 Reset controller to setup AER completions for this process 00:29:23.952 Registering asynchronous event callbacks... 
00:29:23.952 Getting orig temperature thresholds of all controllers 00:29:23.952 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:29:23.952 Setting all controllers temperature threshold low to trigger AER 00:29:23.952 Waiting for all controllers temperature threshold to be set lower 00:29:23.952 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:29:23.952 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:29:23.952 Waiting for all controllers to trigger AER and reset threshold 00:29:23.952 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:29:23.952 Cleaning up... 00:29:23.952 00:29:23.952 real 0m2.359s 00:29:23.952 user 0m1.747s 00:29:23.952 sys 0m0.725s 00:29:23.952 06:21:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:23.952 06:21:54 -- common/autotest_common.sh@10 -- # set +x 00:29:23.952 ************************************ 00:29:23.952 END TEST nvme_multi_aen 00:29:23.952 ************************************ 00:29:23.952 06:21:54 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:29:23.952 06:21:54 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:29:23.952 06:21:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:23.952 06:21:54 -- common/autotest_common.sh@10 -- # set +x 00:29:23.952 ************************************ 00:29:23.952 START TEST nvme_startup 00:29:23.952 ************************************ 00:29:23.952 06:21:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:29:24.520 Initializing NVMe Controllers 00:29:24.520 Attached to 0000:00:06.0 00:29:24.520 Initialization complete. 00:29:24.520 Time used:265783.281 (us). 00:29:24.520 00:29:24.520 real 0m0.379s 00:29:24.520 user 0m0.095s 00:29:24.520 sys 0m0.186s 00:29:24.520 06:21:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:24.520 ************************************ 00:29:24.520 END TEST nvme_startup 00:29:24.520 ************************************ 00:29:24.520 06:21:54 -- common/autotest_common.sh@10 -- # set +x 00:29:24.520 06:21:54 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:29:24.520 06:21:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:24.520 06:21:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:24.520 06:21:54 -- common/autotest_common.sh@10 -- # set +x 00:29:24.520 ************************************ 00:29:24.520 START TEST nvme_multi_secondary 00:29:24.520 ************************************ 00:29:24.520 06:21:54 -- common/autotest_common.sh@1104 -- # nvme_multi_secondary 00:29:24.520 06:21:54 -- nvme/nvme.sh@52 -- # pid0=135542 00:29:24.520 06:21:54 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:29:24.520 06:21:54 -- nvme/nvme.sh@54 -- # pid1=135543 00:29:24.520 06:21:54 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:29:24.520 06:21:54 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:29:27.806 Initializing NVMe Controllers 00:29:27.806 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:27.806 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:29:27.806 Initialization complete. Launching workers. 
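The spdk_nvme_perf invocations above all pass -i 0, the shared-memory ID that lets them run as one DPDK primary process (pid0, -c 0x1) plus secondaries on other core masks against the same controller. Stripped of the harness, the pattern looks like this (the sleep is my assumption; nvme.sh sequences the startup differently):

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # primary: probes the controller, owns the shared hugepage state
    pid0=$!
    sleep 1                                            # assumed grace period for primary init
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # secondary attaches via the same shm id
    pid1=$!
    wait "$pid0" "$pid1"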
00:29:27.806 ======================================================== 00:29:27.806 Latency(us) 00:29:27.806 Device Information : IOPS MiB/s Average min max 00:29:27.806 PCIE (0000:00:06.0) NSID 1 from core 1: 35231.67 137.62 453.87 158.46 1786.14 00:29:27.806 ======================================================== 00:29:27.806 Total : 35231.67 137.62 453.87 158.46 1786.14 00:29:27.806 00:29:28.065 Initializing NVMe Controllers 00:29:28.065 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:28.065 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:29:28.065 Initialization complete. Launching workers. 00:29:28.066 ======================================================== 00:29:28.066 Latency(us) 00:29:28.066 Device Information : IOPS MiB/s Average min max 00:29:28.066 PCIE (0000:00:06.0) NSID 1 from core 2: 14592.00 57.00 1096.27 175.07 20696.26 00:29:28.066 ======================================================== 00:29:28.066 Total : 14592.00 57.00 1096.27 175.07 20696.26 00:29:28.066 00:29:28.066 06:21:58 -- nvme/nvme.sh@56 -- # wait 135542 00:29:29.972 Initializing NVMe Controllers 00:29:29.972 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:29.972 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:29:29.972 Initialization complete. Launching workers. 00:29:29.972 ======================================================== 00:29:29.972 Latency(us) 00:29:29.972 Device Information : IOPS MiB/s Average min max 00:29:29.972 PCIE (0000:00:06.0) NSID 1 from core 0: 42352.15 165.44 377.50 135.06 16034.32 00:29:29.972 ======================================================== 00:29:29.972 Total : 42352.15 165.44 377.50 135.06 16034.32 00:29:29.972 00:29:30.231 06:22:00 -- nvme/nvme.sh@57 -- # wait 135543 00:29:30.231 06:22:00 -- nvme/nvme.sh@61 -- # pid0=135618 00:29:30.231 06:22:00 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:29:30.231 06:22:00 -- nvme/nvme.sh@63 -- # pid1=135619 00:29:30.231 06:22:00 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:29:30.231 06:22:00 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:29:33.518 Initializing NVMe Controllers 00:29:33.518 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:33.518 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:29:33.518 Initialization complete. Launching workers. 00:29:33.518 ======================================================== 00:29:33.518 Latency(us) 00:29:33.518 Device Information : IOPS MiB/s Average min max 00:29:33.518 PCIE (0000:00:06.0) NSID 1 from core 1: 35164.13 137.36 454.71 159.94 16627.52 00:29:33.518 ======================================================== 00:29:33.518 Total : 35164.13 137.36 454.71 159.94 16627.52 00:29:33.518 00:29:33.777 Initializing NVMe Controllers 00:29:33.777 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:33.777 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:29:33.777 Initialization complete. Launching workers. 
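The IOPS and Average columns in these perf tables cross-check via Little's law: with queue depth 16, average latency ≈ 16 / IOPS, e.g. 16 / 35231.67 ≈ 454.1 us against the reported 453.87 us for core 1, and 16 / 14592.00 ≈ 1096.5 us against 1096.27 us for the core 2 run. As a one-liner:

    echo "scale=2; 16 * 1000000 / 35231.67" | bc   # 454.13 us, consistent with the 453.87 us average above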
00:29:33.777 ======================================================== 00:29:33.777 Latency(us) 00:29:33.777 Device Information : IOPS MiB/s Average min max 00:29:33.777 PCIE (0000:00:06.0) NSID 1 from core 0: 36420.43 142.27 439.02 155.81 1320.49 00:29:33.777 ======================================================== 00:29:33.777 Total : 36420.43 142.27 439.02 155.81 1320.49 00:29:33.777 00:29:35.682 Initializing NVMe Controllers 00:29:35.682 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:35.682 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:29:35.682 Initialization complete. Launching workers. 00:29:35.682 ======================================================== 00:29:35.682 Latency(us) 00:29:35.682 Device Information : IOPS MiB/s Average min max 00:29:35.682 PCIE (0000:00:06.0) NSID 1 from core 2: 18841.20 73.60 848.80 111.75 32324.80 00:29:35.682 ======================================================== 00:29:35.682 Total : 18841.20 73.60 848.80 111.75 32324.80 00:29:35.682 00:29:35.682 ************************************ 00:29:35.682 END TEST nvme_multi_secondary 00:29:35.682 ************************************ 00:29:35.682 06:22:06 -- nvme/nvme.sh@65 -- # wait 135618 00:29:35.682 06:22:06 -- nvme/nvme.sh@66 -- # wait 135619 00:29:35.682 00:29:35.682 real 0m11.242s 00:29:35.682 user 0m18.640s 00:29:35.682 sys 0m1.075s 00:29:35.682 06:22:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:35.682 06:22:06 -- common/autotest_common.sh@10 -- # set +x 00:29:35.682 06:22:06 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:29:35.682 06:22:06 -- nvme/nvme.sh@102 -- # kill_stub 00:29:35.682 06:22:06 -- common/autotest_common.sh@1065 -- # [[ -e /proc/134792 ]] 00:29:35.682 06:22:06 -- common/autotest_common.sh@1066 -- # kill 134792 00:29:35.682 06:22:06 -- common/autotest_common.sh@1067 -- # wait 134792 00:29:36.250 [2024-06-11 06:22:06.809055] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 135448) is not found. Dropping the request. 00:29:36.250 [2024-06-11 06:22:06.809204] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 135448) is not found. Dropping the request. 00:29:36.509 [2024-06-11 06:22:06.809264] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 135448) is not found. Dropping the request. 00:29:36.509 [2024-06-11 06:22:06.809306] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 135448) is not found. Dropping the request. 00:29:37.078 06:22:07 -- common/autotest_common.sh@1069 -- # rm -f /var/run/spdk_stub0 00:29:37.078 06:22:07 -- common/autotest_common.sh@1073 -- # echo 2 00:29:37.078 06:22:07 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:29:37.078 06:22:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:37.078 06:22:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:37.078 06:22:07 -- common/autotest_common.sh@10 -- # set +x 00:29:37.078 ************************************ 00:29:37.078 START TEST bdev_nvme_reset_stuck_adm_cmd 00:29:37.078 ************************************ 00:29:37.078 06:22:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:29:37.078 * Looking for test storage... 
00:29:37.078 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:37.078 06:22:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:29:37.078 06:22:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:29:37.078 06:22:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:29:37.078 06:22:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:29:37.078 06:22:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:29:37.078 06:22:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:29:37.078 06:22:07 -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:37.078 06:22:07 -- common/autotest_common.sh@1509 -- # local bdfs 00:29:37.078 06:22:07 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:29:37.078 06:22:07 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:29:37.078 06:22:07 -- common/autotest_common.sh@1498 -- # bdfs=() 00:29:37.078 06:22:07 -- common/autotest_common.sh@1498 -- # local bdfs 00:29:37.078 06:22:07 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:37.078 06:22:07 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:37.078 06:22:07 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:37.078 06:22:07 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:29:37.078 06:22:07 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:29:37.078 06:22:07 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:29:37.078 06:22:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:29:37.078 06:22:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:29:37.078 06:22:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=135783 00:29:37.078 06:22:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:29:37.078 06:22:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:37.078 06:22:07 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 135783 00:29:37.078 06:22:07 -- common/autotest_common.sh@819 -- # '[' -z 135783 ']' 00:29:37.078 06:22:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.078 06:22:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:37.078 06:22:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:37.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:37.078 06:22:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:37.078 06:22:07 -- common/autotest_common.sh@10 -- # set +x 00:29:37.337 [2024-06-11 06:22:07.776249] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
00:29:37.337 [2024-06-11 06:22:07.776469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135783 ] 00:29:37.596 [2024-06-11 06:22:08.015481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:37.856 [2024-06-11 06:22:08.292024] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:37.856 [2024-06-11 06:22:08.292469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.856 [2024-06-11 06:22:08.292532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:37.856 [2024-06-11 06:22:08.293207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.856 [2024-06-11 06:22:08.293206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:38.792 06:22:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:38.792 06:22:09 -- common/autotest_common.sh@852 -- # return 0 00:29:38.792 06:22:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:29:38.792 06:22:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:38.792 06:22:09 -- common/autotest_common.sh@10 -- # set +x 00:29:39.051 nvme0n1 00:29:39.051 06:22:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:39.051 06:22:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:29:39.051 06:22:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_oramg.txt 00:29:39.051 06:22:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:29:39.051 06:22:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:39.051 06:22:09 -- common/autotest_common.sh@10 -- # set +x 00:29:39.051 true 00:29:39.051 06:22:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:39.051 06:22:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:29:39.051 06:22:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1718086929 00:29:39.051 06:22:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=135819 00:29:39.051 06:22:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:39.051 06:22:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:29:39.051 06:22:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:29:40.954 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:40.954 06:22:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.954 06:22:11 -- common/autotest_common.sh@10 -- # set +x 00:29:40.954 [2024-06-11 06:22:11.484449] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:29:40.954 [2024-06-11 06:22:11.484863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:40.954 [2024-06-11 06:22:11.484949] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:29:40.954 [2024-06-11 06:22:11.484975] nvme_qpair.c: 
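Pulled together, the RPC sequence traced above is the whole stuck-admin-command scenario: attach the controller as bdev nvme0, arm a one-shot injection that holds admin opcode 10 (Get Features) for up to 15 s while forcing status sct 0/sc 1, issue a Get Features through bdev_nvme_send_cmd in the background, then reset the controller so the held command gets completed. Condensed from the exact commands in this log, with $cmd_b64 and $tmp_file standing in for the literal base64 payload and mktemp path shown above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd_b64" > "$tmp_file" &   # held by the injection
    get_feat_pid=$!
    $rpc bdev_nvme_reset_controller nvme0   # the reset must complete the stuck command
    wait "$get_feat_pid"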
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:40.954 [2024-06-11 06:22:11.486994] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:40.954 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 135819 00:29:40.954 06:22:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.954 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 135819 00:29:40.954 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 135819 00:29:40.954 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:29:40.954 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:29:40.954 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:40.954 06:22:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.954 06:22:11 -- common/autotest_common.sh@10 -- # set +x 00:29:40.954 06:22:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.954 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:29:40.954 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_oramg.txt 00:29:40.954 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:29:40.954 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:29:40.954 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:29:40.954 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:29:40.954 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:29:40.955 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:29:40.955 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:29:40.955 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:29:40.955 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:29:40.955 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:29:40.955 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:29:40.955 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:29:40.955 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:29:40.955 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:29:40.955 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:29:40.955 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:29:40.955 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:29:40.955 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:29:40.955 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:29:40.955 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_oramg.txt 00:29:41.214 06:22:11 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 135783 00:29:41.214 06:22:11 -- common/autotest_common.sh@926 -- # '[' -z 135783 ']' 00:29:41.214 06:22:11 -- common/autotest_common.sh@930 -- # kill -0 135783 00:29:41.214 06:22:11 -- common/autotest_common.sh@931 -- # uname 00:29:41.214 
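The two base64_decode_bits calls above recover the NVMe status from the completion that bdev_nvme_send_cmd saved to the temp file: the status field sits in the top 15 bits of completion dword 3 (phase tag in bit 0, SC in bits 1-8, SCT in bits 9-11). An equivalent standalone decode (a sketch, not the repo's base64_decode_bits helper):

    #!/usr/bin/env bash
    cpl_b64=AAAAAAAAAAAAAAAAAAACAA==    # the .cpl field jq pulled from the temp file
    # Bytes 14-15 of the 16-byte completion hold the little-endian status word.
    read -r lo hi < <(base64 -d <<<"$cpl_b64" | od -An -j14 -N2 -tu1)
    status=$(( lo | hi << 8 ))
    printf 'sc=0x%x sct=0x%x\n' $(( (status >> 1) & 0xff )) $(( (status >> 9) & 0x7 ))
    # -> sc=0x1 sct=0x0: the injected Invalid Command Opcode, generic status type.

Those values match the --sc 1 / --sct 0 passed to bdev_nvme_add_error_injection, which is exactly what the (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) check further down verifies.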
06:22:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:41.214 06:22:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 135783 00:29:41.214 06:22:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:41.214 06:22:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:41.214 killing process with pid 135783 00:29:41.214 06:22:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 135783' 00:29:41.214 06:22:11 -- common/autotest_common.sh@945 -- # kill 135783 00:29:41.214 06:22:11 -- common/autotest_common.sh@950 -- # wait 135783 00:29:43.748 06:22:13 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:29:43.748 06:22:13 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:29:43.748 00:29:43.748 real 0m6.386s 00:29:43.748 user 0m21.785s 00:29:43.748 sys 0m1.049s 00:29:43.748 06:22:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:43.748 06:22:13 -- common/autotest_common.sh@10 -- # set +x 00:29:43.748 ************************************ 00:29:43.748 END TEST bdev_nvme_reset_stuck_adm_cmd 00:29:43.748 ************************************ 00:29:43.748 06:22:13 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:29:43.748 06:22:13 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:29:43.748 06:22:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:43.748 06:22:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:43.748 06:22:13 -- common/autotest_common.sh@10 -- # set +x 00:29:43.748 ************************************ 00:29:43.748 START TEST nvme_fio 00:29:43.748 ************************************ 00:29:43.748 06:22:13 -- common/autotest_common.sh@1104 -- # nvme_fio_test 00:29:43.748 06:22:13 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:29:43.748 06:22:13 -- nvme/nvme.sh@32 -- # ran_fio=false 00:29:43.748 06:22:13 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:29:43.748 06:22:13 -- common/autotest_common.sh@1498 -- # bdfs=() 00:29:43.748 06:22:13 -- common/autotest_common.sh@1498 -- # local bdfs 00:29:43.748 06:22:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:43.748 06:22:13 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:43.748 06:22:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:43.748 06:22:13 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:29:43.748 06:22:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:29:43.748 06:22:13 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0') 00:29:43.748 06:22:13 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:29:43.748 06:22:13 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:29:43.748 06:22:13 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:29:43.748 06:22:13 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:29:43.748 06:22:14 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:29:43.748 06:22:14 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:29:44.007 06:22:14 -- nvme/nvme.sh@41 -- # bs=4096 00:29:44.007 06:22:14 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:29:44.007 
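The fio_nvme call above expands, in the trace that follows, into an LD_PRELOAD dance: this build's SPDK fio plugin links libasan, and the ASan runtime must come first in the preload list or fio aborts at startup. A reduced sketch of that wrapper (paths as in this run):

    #!/usr/bin/env bash
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    # If the plugin links libasan, that library has to be preloaded ahead of it.
    asan_lib=$(ldd "$plugin" | awk '/libasan/ {print $3}')
    LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096

Note the dots in the BDF: fio treats ':' as a filename separator, so the SPDK ioengine expects the PCI address with its colons replaced by dots.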
06:22:14 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:29:44.007 06:22:14 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:29:44.007 06:22:14 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:44.007 06:22:14 -- common/autotest_common.sh@1318 -- # local sanitizers 00:29:44.007 06:22:14 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:29:44.007 06:22:14 -- common/autotest_common.sh@1320 -- # shift 00:29:44.007 06:22:14 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:29:44.007 06:22:14 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:29:44.007 06:22:14 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:29:44.007 06:22:14 -- common/autotest_common.sh@1324 -- # grep libasan 00:29:44.007 06:22:14 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:29:44.007 06:22:14 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:29:44.007 06:22:14 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:29:44.007 06:22:14 -- common/autotest_common.sh@1326 -- # break 00:29:44.007 06:22:14 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:29:44.007 06:22:14 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:29:44.267 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:44.267 fio-3.35 00:29:44.267 Starting 1 thread 00:29:47.556 00:29:47.556 test: (groupid=0, jobs=1): err= 0: pid=135966: Tue Jun 11 06:22:17 2024 00:29:47.556 read: IOPS=18.6k, BW=72.7MiB/s (76.2MB/s)(145MiB/2001msec) 00:29:47.556 slat (usec): min=4, max=167, avg= 5.69, stdev= 2.18 00:29:47.556 clat (usec): min=218, max=14557, avg=3416.13, stdev=427.21 00:29:47.556 lat (usec): min=223, max=14724, avg=3421.82, stdev=427.80 00:29:47.556 clat percentiles (usec): 00:29:47.556 | 1.00th=[ 2933], 5.00th=[ 3097], 10.00th=[ 3163], 20.00th=[ 3228], 00:29:47.556 | 30.00th=[ 3294], 40.00th=[ 3326], 50.00th=[ 3359], 60.00th=[ 3392], 00:29:47.556 | 70.00th=[ 3458], 80.00th=[ 3523], 90.00th=[ 3654], 95.00th=[ 4015], 00:29:47.556 | 99.00th=[ 4490], 99.50th=[ 4686], 99.90th=[ 8717], 99.95th=[11338], 00:29:47.556 | 99.99th=[14222] 00:29:47.556 bw ( KiB/s): min=70802, max=75176, per=98.50%, avg=73278.00, stdev=2243.55, samples=3 00:29:47.556 iops : min=17700, max=18794, avg=18319.33, stdev=561.16, samples=3 00:29:47.556 write: IOPS=18.6k, BW=72.7MiB/s (76.2MB/s)(145MiB/2001msec); 0 zone resets 00:29:47.556 slat (nsec): min=4428, max=35433, avg=5866.64, stdev=1986.67 00:29:47.556 clat (usec): min=227, max=14299, avg=3430.37, stdev=433.82 00:29:47.556 lat (usec): min=232, max=14324, avg=3436.24, stdev=434.32 00:29:47.556 clat percentiles (usec): 00:29:47.556 | 1.00th=[ 2933], 5.00th=[ 3130], 10.00th=[ 3195], 20.00th=[ 3228], 00:29:47.556 | 30.00th=[ 3294], 40.00th=[ 3326], 50.00th=[ 3359], 60.00th=[ 3425], 00:29:47.556 | 70.00th=[ 3458], 80.00th=[ 3523], 90.00th=[ 3687], 95.00th=[ 4047], 00:29:47.556 | 99.00th=[ 4555], 99.50th=[ 4752], 99.90th=[ 8979], 99.95th=[11600], 
00:29:47.556 | 99.99th=[13829] 00:29:47.556 bw ( KiB/s): min=70906, max=74432, per=98.23%, avg=73139.33, stdev=1942.11, samples=3 00:29:47.556 iops : min=17726, max=18608, avg=18284.67, stdev=485.82, samples=3 00:29:47.556 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:29:47.556 lat (msec) : 2=0.13%, 4=94.39%, 10=5.35%, 20=0.08% 00:29:47.556 cpu : usr=99.85%, sys=0.05%, ctx=4, majf=0, minf=36 00:29:47.556 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:29:47.556 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.556 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:47.556 issued rwts: total=37217,37247,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.556 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:47.556 00:29:47.556 Run status group 0 (all jobs): 00:29:47.556 READ: bw=72.7MiB/s (76.2MB/s), 72.7MiB/s-72.7MiB/s (76.2MB/s-76.2MB/s), io=145MiB (152MB), run=2001-2001msec 00:29:47.556 WRITE: bw=72.7MiB/s (76.2MB/s), 72.7MiB/s-72.7MiB/s (76.2MB/s-76.2MB/s), io=145MiB (153MB), run=2001-2001msec 00:29:47.816 ----------------------------------------------------- 00:29:47.816 Suppressions used: 00:29:47.816 count bytes template 00:29:47.816 1 32 /usr/src/fio/parse.c 00:29:47.816 ----------------------------------------------------- 00:29:47.816 00:29:47.816 06:22:18 -- nvme/nvme.sh@44 -- # ran_fio=true 00:29:47.816 06:22:18 -- nvme/nvme.sh@46 -- # true 00:29:47.816 00:29:47.816 real 0m4.385s 00:29:47.816 user 0m3.587s 00:29:47.816 sys 0m0.490s 00:29:47.816 06:22:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:47.816 06:22:18 -- common/autotest_common.sh@10 -- # set +x 00:29:47.816 ************************************ 00:29:47.816 END TEST nvme_fio 00:29:47.816 ************************************ 00:29:47.816 00:29:47.816 real 0m54.030s 00:29:47.816 user 2m15.681s 00:29:47.816 sys 0m13.501s 00:29:47.816 06:22:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:47.816 06:22:18 -- common/autotest_common.sh@10 -- # set +x 00:29:47.816 ************************************ 00:29:47.816 END TEST nvme 00:29:47.816 ************************************ 00:29:47.816 06:22:18 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:29:47.816 06:22:18 -- spdk/autotest.sh@227 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:29:47.816 06:22:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:47.816 06:22:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:47.816 06:22:18 -- common/autotest_common.sh@10 -- # set +x 00:29:47.816 ************************************ 00:29:47.816 START TEST nvme_scc 00:29:47.816 ************************************ 00:29:47.816 06:22:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:29:48.075 * Looking for test storage... 
00:29:48.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:48.075 06:22:18 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:29:48.075 06:22:18 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:29:48.075 06:22:18 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:29:48.076 06:22:18 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:48.076 06:22:18 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:48.076 06:22:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:48.076 06:22:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:48.076 06:22:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:48.076 06:22:18 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:48.076 06:22:18 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:48.076 06:22:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:48.076 06:22:18 -- paths/export.sh@5 -- # export PATH 00:29:48.076 06:22:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:48.076 06:22:18 -- nvme/functions.sh@10 -- # ctrls=() 00:29:48.076 06:22:18 -- nvme/functions.sh@10 -- # declare -A ctrls 00:29:48.076 06:22:18 -- nvme/functions.sh@11 -- # nvmes=() 00:29:48.076 06:22:18 -- nvme/functions.sh@11 -- # declare -A nvmes 00:29:48.076 06:22:18 -- nvme/functions.sh@12 -- # bdfs=() 00:29:48.076 06:22:18 -- nvme/functions.sh@12 -- # declare -A bdfs 00:29:48.076 06:22:18 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:29:48.076 06:22:18 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:29:48.076 06:22:18 -- nvme/functions.sh@14 -- # nvme_name= 00:29:48.076 06:22:18 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:48.076 06:22:18 -- nvme/nvme_scc.sh@12 -- # uname 00:29:48.076 06:22:18 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:29:48.076 06:22:18 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 
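The scan that follows (scan_nvme_ctrls -> nvme_get) walks /sys/class/nvme, maps each controller to its PCI address, and folds every line of nvme id-ctrl / id-ns output into bash associative arrays (nvme0[vid], nvme0n1[nsze], ...) so later feature probes are plain lookups. A reduced sketch of that parse, assuming nvme-cli's usual 'field : value' output:

    #!/usr/bin/env bash
    declare -A nvme0
    # Keep each 'reg : val' pair from id-ctrl as an array entry.
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}    # 'oncs      ' -> 'oncs'
        val=${val# }                # drop the space after the colon
        [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "oncs=${nvme0[oncs]}"      # 0x15d on this QEMU controller

The SCC probe at the end of the scan then reduces to a single bit test, (( oncs & 1 << 8 )): ONCS bit 8 is the Copy command bit, and 0x15d has it set, which is why get_ctrls_with_feature ends up echoing nvme0.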
00:29:48.076 06:22:18 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:48.336 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:29:48.596 Waiting for block devices as requested 00:29:48.596 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:29:48.596 06:22:19 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:29:48.596 06:22:19 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:29:48.596 06:22:19 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:29:48.596 06:22:19 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:29:48.596 06:22:19 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:29:48.596 06:22:19 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:29:48.596 06:22:19 -- scripts/common.sh@15 -- # local i 00:29:48.596 06:22:19 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:29:48.596 06:22:19 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:29:48.596 06:22:19 -- scripts/common.sh@24 -- # return 0 00:29:48.596 06:22:19 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:29:48.596 06:22:19 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:29:48.596 06:22:19 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:29:48.596 06:22:19 -- nvme/functions.sh@18 -- # shift 00:29:48.596 06:22:19 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:29:48.596 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.596 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.596 06:22:19 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:29:48.596 06:22:19 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:29:48.596 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.596 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.596 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:29:48.596 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:29:48.596 06:22:19 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:29:48.596 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.596 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.596 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:29:48.596 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:29:48.596 06:22:19 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:29:48.596 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.596 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.596 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:29:48.596 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:29:48.596 06:22:19 -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:29:48.596 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.596 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.596 06:22:19 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:29:48.596 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:29:48.596 06:22:19 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:29:48.596 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.596 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.596 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:29:48.596 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:29:48.596 06:22:19 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:29:48.596 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.596 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.596 06:22:19 -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:29:48.596 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:29:48.596 06:22:19 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:29:48.596 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.596 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.596 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:29:48.596 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:29:48.596 06:22:19 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:29:48.596 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.596 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.596 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.596 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:29:48.596 06:22:19 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:29:48.596 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.596 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.596 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:29:48.596 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:29:48.596 06:22:19 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:29:48.596 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.596 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.596 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.596 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:29:48.596 06:22:19 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:29:48.596 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.596 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.596 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:29:48.596 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 
00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:29:48.597 06:22:19 -- 
nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:29:48.597 06:22:19 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.597 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.597 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.598 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:29:48.598 06:22:19 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:29:48.598 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.598 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.598 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.598 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:29:48.598 06:22:19 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:29:48.598 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.598 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.598 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.598 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:29:48.598 06:22:19 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:29:48.861 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.861 06:22:19 -- nvme/functions.sh@21 -- 
# read -r reg val 00:29:48.861 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.861 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:29:48.861 06:22:19 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:29:48.861 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.861 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.861 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.861 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:29:48.861 06:22:19 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:29:48.861 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.861 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.861 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.861 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:29:48.861 06:22:19 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:29:48.861 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.861 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.861 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.861 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:29:48.861 06:22:19 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:29:48.861 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.861 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.861 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.861 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:29:48.861 06:22:19 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:29:48.861 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.861 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.861 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.861 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:29:48.861 06:22:19 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:29:48.861 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.861 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.861 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.861 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:29:48.861 06:22:19 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:29:48.861 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.861 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.861 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.861 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:29:48.861 06:22:19 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:29:48.861 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.861 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.861 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.861 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:29:48.861 06:22:19 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:29:48.861 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.861 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.861 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.861 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:29:48.861 06:22:19 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:29:48.861 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.861 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.861 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.861 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:29:48.861 06:22:19 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:29:48.861 
06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.861 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.861 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.861 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:29:48.861 06:22:19 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:29:48.861 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.861 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:29:48.862 
06:22:19 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 
06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:29:48.862 06:22:19 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:29:48.862 06:22:19 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:29:48.862 06:22:19 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:29:48.862 06:22:19 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@18 -- # shift 00:29:48.862 06:22:19 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 
00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:29:48.862 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.862 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.862 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 
00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 
06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.863 06:22:19 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:29:48.863 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:29:48.863 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.864 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.864 06:22:19 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:29:48.864 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:29:48.864 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:29:48.864 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.864 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.864 06:22:19 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:29:48.864 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:29:48.864 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:29:48.864 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.864 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.864 06:22:19 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:29:48.864 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:29:48.864 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:29:48.864 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.864 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.864 06:22:19 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:29:48.864 06:22:19 -- nvme/functions.sh@23 -- # eval 
'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:29:48.864 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:29:48.864 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.864 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.864 06:22:19 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:29:48.864 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:29:48.864 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:29:48.864 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.864 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.864 06:22:19 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:29:48.864 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:29:48.864 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:29:48.864 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.864 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.864 06:22:19 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:29:48.864 06:22:19 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:29:48.864 06:22:19 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:29:48.864 06:22:19 -- nvme/functions.sh@21 -- # IFS=: 00:29:48.864 06:22:19 -- nvme/functions.sh@21 -- # read -r reg val 00:29:48.864 06:22:19 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:29:48.864 06:22:19 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:29:48.864 06:22:19 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:29:48.864 06:22:19 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:29:48.864 06:22:19 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:29:48.864 06:22:19 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:29:48.864 06:22:19 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:29:48.864 06:22:19 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:29:48.864 06:22:19 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:29:48.864 06:22:19 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:29:48.864 06:22:19 -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:29:48.864 06:22:19 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:29:48.864 06:22:19 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:29:48.864 06:22:19 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:29:48.864 06:22:19 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:29:48.864 06:22:19 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:29:48.864 06:22:19 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:29:48.864 06:22:19 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:29:48.864 06:22:19 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:29:48.864 06:22:19 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:29:48.864 06:22:19 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:29:48.864 06:22:19 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:29:48.864 06:22:19 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:29:48.864 06:22:19 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:29:48.864 06:22:19 -- nvme/functions.sh@76 -- # echo 0x15d 00:29:48.864 06:22:19 -- nvme/functions.sh@184 -- # oncs=0x15d 00:29:48.864 06:22:19 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:29:48.864 06:22:19 -- nvme/functions.sh@197 -- # echo nvme0 00:29:48.864 06:22:19 -- nvme/functions.sh@205 
-- # (( 1 > 0 )) 00:29:48.864 06:22:19 -- nvme/functions.sh@206 -- # echo nvme0 00:29:48.864 06:22:19 -- nvme/functions.sh@207 -- # return 0 00:29:48.864 06:22:19 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:29:48.864 06:22:19 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:06.0 00:29:48.864 06:22:19 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:49.190 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:29:49.449 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:29:51.354 06:22:21 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:29:51.354 06:22:21 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:29:51.354 06:22:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:51.354 06:22:21 -- common/autotest_common.sh@10 -- # set +x 00:29:51.354 ************************************ 00:29:51.354 START TEST nvme_simple_copy 00:29:51.354 ************************************ 00:29:51.354 06:22:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:29:51.922 Initializing NVMe Controllers 00:29:51.922 Attaching to 0000:00:06.0 00:29:51.922 Controller supports SCC. Attached to 0000:00:06.0 00:29:51.922 Namespace ID: 1 size: 5GB 00:29:51.922 Initialization complete. 00:29:51.922 00:29:51.922 Controller QEMU NVMe Ctrl (12340 ) 00:29:51.922 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:29:51.922 Namespace Block Size:4096 00:29:51.922 Writing LBAs 0 to 63 with Random Data 00:29:51.922 Copied LBAs from 0 - 63 to the Destination LBA 256 00:29:51.922 LBAs matching Written Data: 64 00:29:51.922 00:29:51.922 real 0m0.367s 00:29:51.922 user 0m0.149s 00:29:51.922 sys 0m0.119s 00:29:51.922 06:22:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:51.922 06:22:22 -- common/autotest_common.sh@10 -- # set +x 00:29:51.922 ************************************ 00:29:51.922 END TEST nvme_simple_copy 00:29:51.922 ************************************ 00:29:51.922 00:29:51.922 real 0m3.955s 00:29:51.922 user 0m0.807s 00:29:51.922 sys 0m3.051s 00:29:51.922 06:22:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:51.922 06:22:22 -- common/autotest_common.sh@10 -- # set +x 00:29:51.922 ************************************ 00:29:51.922 END TEST nvme_scc 00:29:51.922 ************************************ 00:29:51.922 06:22:22 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]] 00:29:51.922 06:22:22 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:29:51.922 06:22:22 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]] 00:29:51.922 06:22:22 -- spdk/autotest.sh@238 -- # [[ 0 -eq 1 ]] 00:29:51.922 06:22:22 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]] 00:29:51.922 06:22:22 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:29:51.922 06:22:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:51.922 06:22:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:51.922 06:22:22 -- common/autotest_common.sh@10 -- # set +x 00:29:51.922 ************************************ 00:29:51.922 START TEST nvme_rpc 00:29:51.922 ************************************ 00:29:51.922 06:22:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:29:52.180 * Looking for test storage... 
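The long xtrace block above is nvme/functions.sh splitting each "reg : val" line of controller/namespace identify output on IFS=: and eval-ing it into a per-device associative array, then testing ONCS bit 8 (0x15d & 1<<8) to find a controller that supports the Simple Copy command. A minimal standalone sketch of that populate-and-probe pattern — the array name, helper layout, and sample input below are illustrative, not the script's own:

#!/usr/bin/env bash
# Sketch: parse "name : value" lines (the shape of the identify output
# walked above) into an associative array, then test a feature bit.
declare -A ctrl

while IFS=: read -r reg val; do
  reg=${reg//[[:space:]]/}            # strip padding around the register name
  val=${val# }                        # drop the leading space on the value
  [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
done <<'EOF'
oncs  : 0x15d
nmic  : 0
mssrl : 128
EOF

# ONCS bit 8 advertises the Simple Copy command, which is what
# ctrl_has_scc checks above before nvme_simple_copy runs.
if (( ctrl[oncs] & 1 << 8 )); then
  echo "controller supports the Simple Copy command"
fi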
00:29:52.180 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:52.180 06:22:22 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:52.180 06:22:22 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:29:52.180 06:22:22 -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:52.180 06:22:22 -- common/autotest_common.sh@1509 -- # local bdfs 00:29:52.180 06:22:22 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:29:52.180 06:22:22 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:29:52.180 06:22:22 -- common/autotest_common.sh@1498 -- # bdfs=() 00:29:52.180 06:22:22 -- common/autotest_common.sh@1498 -- # local bdfs 00:29:52.180 06:22:22 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:52.180 06:22:22 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:52.180 06:22:22 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:29:52.180 06:22:22 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:29:52.180 06:22:22 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:29:52.180 06:22:22 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:29:52.180 06:22:22 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:29:52.180 06:22:22 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=136458 00:29:52.180 06:22:22 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:29:52.180 06:22:22 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:29:52.180 06:22:22 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 136458 00:29:52.180 06:22:22 -- common/autotest_common.sh@819 -- # '[' -z 136458 ']' 00:29:52.180 06:22:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.180 06:22:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:52.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.180 06:22:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.180 06:22:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:52.180 06:22:22 -- common/autotest_common.sh@10 -- # set +x 00:29:52.181 [2024-06-11 06:22:22.752599] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
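get_first_nvme_bdf above resolves the controller's PCI address by piping the JSON emitted by scripts/gen_nvme.sh through jq and taking the first traddr. A standalone sketch of the same selection; the inline sample JSON stands in for gen_nvme.sh output and is fabricated for illustration:

#!/usr/bin/env bash
# Sketch: pick the first NVMe PCI address (BDF) out of a gen_nvme.sh-style
# JSON config with jq, as get_first_nvme_bdf does in the trace above.
config='{"config":[{"params":{"traddr":"0000:00:06.0"}},{"params":{"traddr":"0000:00:07.0"}}]}'

mapfile -t bdfs < <(jq -r '.config[].params.traddr' <<<"$config")

(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
echo "first bdf: ${bdfs[0]}"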
00:29:52.181 [2024-06-11 06:22:22.752829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136458 ] 00:29:52.439 [2024-06-11 06:22:22.943065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:52.698 [2024-06-11 06:22:23.217104] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:52.698 [2024-06-11 06:22:23.217598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.698 [2024-06-11 06:22:23.217613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.632 06:22:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:53.632 06:22:24 -- common/autotest_common.sh@852 -- # return 0 00:29:53.632 06:22:24 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:29:53.891 Nvme0n1 00:29:53.891 06:22:24 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:29:53.891 06:22:24 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:29:54.150 request: 00:29:54.150 { 00:29:54.150 "filename": "non_existing_file", 00:29:54.150 "bdev_name": "Nvme0n1", 00:29:54.150 "method": "bdev_nvme_apply_firmware", 00:29:54.150 "req_id": 1 00:29:54.150 } 00:29:54.150 Got JSON-RPC error response 00:29:54.150 response: 00:29:54.150 { 00:29:54.150 "code": -32603, 00:29:54.150 "message": "open file failed." 00:29:54.150 } 00:29:54.150 06:22:24 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:29:54.150 06:22:24 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:29:54.150 06:22:24 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:54.408 06:22:24 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:29:54.408 06:22:24 -- nvme/nvme_rpc.sh@40 -- # killprocess 136458 00:29:54.408 06:22:24 -- common/autotest_common.sh@926 -- # '[' -z 136458 ']' 00:29:54.408 06:22:24 -- common/autotest_common.sh@930 -- # kill -0 136458 00:29:54.408 06:22:24 -- common/autotest_common.sh@931 -- # uname 00:29:54.408 06:22:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:54.408 06:22:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 136458 00:29:54.408 06:22:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:54.408 killing process with pid 136458 00:29:54.408 06:22:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:54.408 06:22:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 136458' 00:29:54.408 06:22:24 -- common/autotest_common.sh@945 -- # kill 136458 00:29:54.408 06:22:24 -- common/autotest_common.sh@950 -- # wait 136458 00:29:56.941 00:29:56.941 real 0m4.528s 00:29:56.941 user 0m8.290s 00:29:56.941 sys 0m0.693s 00:29:56.941 06:22:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:56.941 ************************************ 00:29:56.941 END TEST nvme_rpc 00:29:56.941 ************************************ 00:29:56.941 06:22:26 -- common/autotest_common.sh@10 -- # set +x 00:29:56.941 06:22:27 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:29:56.941 06:22:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:56.941 06:22:27 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:29:56.941 06:22:27 -- common/autotest_common.sh@10 -- # set +x 00:29:56.941 ************************************ 00:29:56.941 START TEST nvme_rpc_timeouts 00:29:56.941 ************************************ 00:29:56.941 06:22:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:29:56.941 * Looking for test storage... 00:29:56.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:56.941 06:22:27 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:56.941 06:22:27 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_136547 00:29:56.941 06:22:27 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_136547 00:29:56.941 06:22:27 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=136572 00:29:56.941 06:22:27 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:29:56.941 06:22:27 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 136572 00:29:56.941 06:22:27 -- common/autotest_common.sh@819 -- # '[' -z 136572 ']' 00:29:56.941 06:22:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.941 06:22:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:56.941 06:22:27 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:29:56.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:56.941 06:22:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:56.941 06:22:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:56.941 06:22:27 -- common/autotest_common.sh@10 -- # set +x 00:29:56.941 [2024-06-11 06:22:27.258852] Starting SPDK v24.01.1-pre git sha1 130b9406a / DPDK 23.11.0 initialization... 
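waitforlisten above blocks until the freshly started spdk_tgt answers on its JSON-RPC socket, with the rpc_addr and max_retries visible in the trace. The helper's body is not shown in this log; a loop in the same spirit might look like the following — the use of rpc_get_methods as the liveness probe and the sleep interval are assumptions, not the helper's actual implementation:

#!/usr/bin/env bash
# Sketch of a waitforlisten-style poll: retry an RPC against the target's
# UNIX socket until it answers, giving up after max_retries attempts.
# rpc_get_methods as the probe is an assumption about the helper's body.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
rpc_addr=/var/tmp/spdk.sock
max_retries=100

for ((i = 0; i < max_retries; i++)); do
  if "$rpc_py" -s "$rpc_addr" rpc_get_methods &>/dev/null; then
    echo "target is up after $i retries"
    exit 0
  fi
  sleep 0.1
done
echo "target never listened on $rpc_addr" >&2
exit 1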
00:29:56.941 [2024-06-11 06:22:27.259066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136572 ] 00:29:56.941 [2024-06-11 06:22:27.443253] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:57.200 [2024-06-11 06:22:27.605673] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:57.200 [2024-06-11 06:22:27.606188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.200 [2024-06-11 06:22:27.606156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:58.134 06:22:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:58.134 06:22:28 -- common/autotest_common.sh@852 -- # return 0 00:29:58.134 06:22:28 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:29:58.134 Checking default timeout settings: 00:29:58.134 06:22:28 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:58.699 06:22:29 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:29:58.699 Making settings changes with rpc: 00:29:58.699 06:22:29 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:29:58.699 06:22:29 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:29:58.699 Check default vs. modified settings: 00:29:58.699 06:22:29 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_136547 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_136547 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:29:58.957 Setting action_on_timeout is changed as expected. 
00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_136547 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_136547 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:29:58.957 Setting timeout_us is changed as expected. 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_136547 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_136547 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:29:58.957 Setting timeout_admin_us is changed as expected. 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_136547 /tmp/settings_modified_136547 00:29:58.957 06:22:29 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 136572 00:29:58.957 06:22:29 -- common/autotest_common.sh@926 -- # '[' -z 136572 ']' 00:29:58.957 06:22:29 -- common/autotest_common.sh@930 -- # kill -0 136572 00:29:58.957 06:22:29 -- common/autotest_common.sh@931 -- # uname 00:29:58.957 06:22:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:58.957 06:22:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 136572 00:29:58.957 06:22:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:58.957 06:22:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:58.957 06:22:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 136572' 00:29:58.957 killing process with pid 136572 00:29:58.957 06:22:29 -- common/autotest_common.sh@945 -- # kill 136572 00:29:58.957 06:22:29 -- common/autotest_common.sh@950 -- # wait 136572 00:30:01.488 06:22:31 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:30:01.488 RPC TIMEOUT SETTING TEST PASSED. 
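killprocess, reconstructed from the xtrace above: verify the PID is non-empty and still alive, check the process name on Linux so a sudo wrapper is never signalled directly, then kill and reap it. The sudo branch in the real helper does more than shown here; this sketch simply refuses:

#!/usr/bin/env bash
# Sketch of killprocess as traced above: sanity-check the PID, make sure
# the target is not a sudo wrapper, then terminate it and wait.
killprocess() {
  local pid=$1 process_name=
  [[ -n $pid ]] || return 1
  kill -0 "$pid" || return 1                 # still running?
  if [[ $(uname) == Linux ]]; then
    process_name=$(ps --no-headers -o comm= "$pid")
  fi
  if [[ $process_name == sudo ]]; then
    return 1   # the real helper handles sudo-wrapped targets specially
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null
}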
00:30:01.488 00:30:01.488 real 0m4.738s 00:30:01.488 user 0m8.971s 00:30:01.488 sys 0m0.677s 00:30:01.488 06:22:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:01.488 06:22:31 -- common/autotest_common.sh@10 -- # set +x 00:30:01.488 ************************************ 00:30:01.488 END TEST nvme_rpc_timeouts 00:30:01.488 ************************************ 00:30:01.488 06:22:31 -- spdk/autotest.sh@251 -- # '[' 1 -eq 0 ']' 00:30:01.488 06:22:31 -- spdk/autotest.sh@255 -- # [[ 0 -eq 1 ]] 00:30:01.488 06:22:31 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:30:01.488 06:22:31 -- spdk/autotest.sh@268 -- # timing_exit lib 00:30:01.488 06:22:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:01.488 06:22:31 -- common/autotest_common.sh@10 -- # set +x 00:30:01.488 06:22:31 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:30:01.488 06:22:31 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:30:01.488 06:22:31 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:30:01.488 06:22:31 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:30:01.488 06:22:31 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:30:01.488 06:22:31 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:30:01.488 06:22:31 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:30:01.488 06:22:31 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:30:01.488 06:22:31 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:30:01.488 06:22:31 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:30:01.488 06:22:31 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:30:01.488 06:22:31 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:30:01.488 06:22:31 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:30:01.488 06:22:31 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:30:01.488 06:22:31 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:30:01.488 06:22:31 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:30:01.488 06:22:31 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:30:01.488 06:22:31 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:30:01.488 06:22:31 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:30:01.488 06:22:31 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:30:01.488 06:22:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:01.488 06:22:31 -- common/autotest_common.sh@10 -- # set +x 00:30:01.488 06:22:31 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:30:01.488 06:22:31 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:30:01.488 06:22:31 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:30:01.488 06:22:31 -- common/autotest_common.sh@10 -- # set +x 00:30:04.022 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:30:04.022 Waiting for block devices as requested 00:30:04.022 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:30:04.282 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:30:04.282 Cleaning 00:30:04.282 Removing: /var/run/dpdk/spdk0/config 00:30:04.282 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:04.282 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:04.282 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:04.282 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:04.282 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:04.282 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:04.282 Removing: /dev/shm/spdk_tgt_trace.pid102933 00:30:04.282 Removing: /var/run/dpdk/spdk0 00:30:04.282 Removing: /var/run/dpdk/spdk_pid102655 00:30:04.541 Removing: 
/var/run/dpdk/spdk_pid102933 00:30:04.541 Removing: /var/run/dpdk/spdk_pid103234 00:30:04.541 Removing: /var/run/dpdk/spdk_pid103529 00:30:04.541 Removing: /var/run/dpdk/spdk_pid103730 00:30:04.542 Removing: /var/run/dpdk/spdk_pid103864 00:30:04.542 Removing: /var/run/dpdk/spdk_pid103980 00:30:04.542 Removing: /var/run/dpdk/spdk_pid104112 00:30:04.542 Removing: /var/run/dpdk/spdk_pid104233 00:30:04.542 Removing: /var/run/dpdk/spdk_pid104292 00:30:04.542 Removing: /var/run/dpdk/spdk_pid104336 00:30:04.542 Removing: /var/run/dpdk/spdk_pid104420 00:30:04.542 Removing: /var/run/dpdk/spdk_pid104556 00:30:04.542 Removing: /var/run/dpdk/spdk_pid105098 00:30:04.542 Removing: /var/run/dpdk/spdk_pid105194 00:30:04.542 Removing: /var/run/dpdk/spdk_pid105292 00:30:04.542 Removing: /var/run/dpdk/spdk_pid105321 00:30:04.542 Removing: /var/run/dpdk/spdk_pid105507 00:30:04.542 Removing: /var/run/dpdk/spdk_pid105537 00:30:04.542 Removing: /var/run/dpdk/spdk_pid105722 00:30:04.542 Removing: /var/run/dpdk/spdk_pid105750 00:30:04.542 Removing: /var/run/dpdk/spdk_pid105828 00:30:04.542 Removing: /var/run/dpdk/spdk_pid105865 00:30:04.542 Removing: /var/run/dpdk/spdk_pid105934 00:30:04.542 Removing: /var/run/dpdk/spdk_pid105970 00:30:04.542 Removing: /var/run/dpdk/spdk_pid106183 00:30:04.542 Removing: /var/run/dpdk/spdk_pid106233 00:30:04.542 Removing: /var/run/dpdk/spdk_pid106284 00:30:04.542 Removing: /var/run/dpdk/spdk_pid106369 00:30:04.542 Removing: /var/run/dpdk/spdk_pid106470 00:30:04.542 Removing: /var/run/dpdk/spdk_pid106519 00:30:04.542 Removing: /var/run/dpdk/spdk_pid106612 00:30:04.542 Removing: /var/run/dpdk/spdk_pid106659 00:30:04.542 Removing: /var/run/dpdk/spdk_pid106716 00:30:04.542 Removing: /var/run/dpdk/spdk_pid106765 00:30:04.542 Removing: /var/run/dpdk/spdk_pid106818 00:30:04.542 Removing: /var/run/dpdk/spdk_pid106865 00:30:04.542 Removing: /var/run/dpdk/spdk_pid106922 00:30:04.542 Removing: /var/run/dpdk/spdk_pid106966 00:30:04.542 Removing: /var/run/dpdk/spdk_pid107023 00:30:04.542 Removing: /var/run/dpdk/spdk_pid107063 00:30:04.542 Removing: /var/run/dpdk/spdk_pid107123 00:30:04.542 Removing: /var/run/dpdk/spdk_pid107164 00:30:04.542 Removing: /var/run/dpdk/spdk_pid107221 00:30:04.542 Removing: /var/run/dpdk/spdk_pid107263 00:30:04.542 Removing: /var/run/dpdk/spdk_pid107322 00:30:04.542 Removing: /var/run/dpdk/spdk_pid107361 00:30:04.542 Removing: /var/run/dpdk/spdk_pid107414 00:30:04.542 Removing: /var/run/dpdk/spdk_pid107463 00:30:04.542 Removing: /var/run/dpdk/spdk_pid107521 00:30:04.542 Removing: /var/run/dpdk/spdk_pid107561 00:30:04.542 Removing: /var/run/dpdk/spdk_pid107618 00:30:04.542 Removing: /var/run/dpdk/spdk_pid107667 00:30:04.542 Removing: /var/run/dpdk/spdk_pid107724 00:30:04.542 Removing: /var/run/dpdk/spdk_pid107766 00:30:04.542 Removing: /var/run/dpdk/spdk_pid107825 00:30:04.542 Removing: /var/run/dpdk/spdk_pid107872 00:30:04.542 Removing: /var/run/dpdk/spdk_pid107924 00:30:04.542 Removing: /var/run/dpdk/spdk_pid107964 00:30:04.542 Removing: /var/run/dpdk/spdk_pid108018 00:30:04.542 Removing: /var/run/dpdk/spdk_pid108065 00:30:04.542 Removing: /var/run/dpdk/spdk_pid108122 00:30:04.542 Removing: /var/run/dpdk/spdk_pid108166 00:30:04.801 Removing: /var/run/dpdk/spdk_pid108223 00:30:04.801 Removing: /var/run/dpdk/spdk_pid108261 00:30:04.801 Removing: /var/run/dpdk/spdk_pid108321 00:30:04.801 Removing: /var/run/dpdk/spdk_pid108373 00:30:04.801 Removing: /var/run/dpdk/spdk_pid108434 00:30:04.801 Removing: /var/run/dpdk/spdk_pid108475 00:30:04.801 Removing: 
/var/run/dpdk/spdk_pid108534 00:30:04.801 Removing: /var/run/dpdk/spdk_pid108577 00:30:04.801 Removing: /var/run/dpdk/spdk_pid108628 00:30:04.801 Removing: /var/run/dpdk/spdk_pid108731 00:30:04.801 Removing: /var/run/dpdk/spdk_pid108871 00:30:04.801 Removing: /var/run/dpdk/spdk_pid109072 00:30:04.801 Removing: /var/run/dpdk/spdk_pid109174 00:30:04.801 Removing: /var/run/dpdk/spdk_pid109257 00:30:04.801 Removing: /var/run/dpdk/spdk_pid110530 00:30:04.801 Removing: /var/run/dpdk/spdk_pid110758 00:30:04.801 Removing: /var/run/dpdk/spdk_pid110984 00:30:04.801 Removing: /var/run/dpdk/spdk_pid111116 00:30:04.801 Removing: /var/run/dpdk/spdk_pid111272 00:30:04.802 Removing: /var/run/dpdk/spdk_pid111358 00:30:04.802 Removing: /var/run/dpdk/spdk_pid111396 00:30:04.802 Removing: /var/run/dpdk/spdk_pid111435 00:30:04.802 Removing: /var/run/dpdk/spdk_pid111918 00:30:04.802 Removing: /var/run/dpdk/spdk_pid112014 00:30:04.802 Removing: /var/run/dpdk/spdk_pid112144 00:30:04.802 Removing: /var/run/dpdk/spdk_pid112214 00:30:04.802 Removing: /var/run/dpdk/spdk_pid113427 00:30:04.802 Removing: /var/run/dpdk/spdk_pid114319 00:30:04.802 Removing: /var/run/dpdk/spdk_pid115224 00:30:04.802 Removing: /var/run/dpdk/spdk_pid116338 00:30:04.802 Removing: /var/run/dpdk/spdk_pid117413 00:30:04.802 Removing: /var/run/dpdk/spdk_pid118474 00:30:04.802 Removing: /var/run/dpdk/spdk_pid119973 00:30:04.802 Removing: /var/run/dpdk/spdk_pid121164 00:30:04.802 Removing: /var/run/dpdk/spdk_pid122386 00:30:04.802 Removing: /var/run/dpdk/spdk_pid123048 00:30:04.802 Removing: /var/run/dpdk/spdk_pid123579 00:30:04.802 Removing: /var/run/dpdk/spdk_pid124194 00:30:04.802 Removing: /var/run/dpdk/spdk_pid124683 00:30:04.802 Removing: /var/run/dpdk/spdk_pid125251 00:30:04.802 Removing: /var/run/dpdk/spdk_pid125782 00:30:04.802 Removing: /var/run/dpdk/spdk_pid126428 00:30:04.802 Removing: /var/run/dpdk/spdk_pid126939 00:30:04.802 Removing: /var/run/dpdk/spdk_pid127601 00:30:04.802 Removing: /var/run/dpdk/spdk_pid127659 00:30:04.802 Removing: /var/run/dpdk/spdk_pid127720 00:30:04.802 Removing: /var/run/dpdk/spdk_pid127790 00:30:04.802 Removing: /var/run/dpdk/spdk_pid127932 00:30:04.802 Removing: /var/run/dpdk/spdk_pid128088 00:30:04.802 Removing: /var/run/dpdk/spdk_pid128312 00:30:04.802 Removing: /var/run/dpdk/spdk_pid128611 00:30:04.802 Removing: /var/run/dpdk/spdk_pid128636 00:30:04.802 Removing: /var/run/dpdk/spdk_pid128697 00:30:04.802 Removing: /var/run/dpdk/spdk_pid128729 00:30:04.802 Removing: /var/run/dpdk/spdk_pid128762 00:30:04.802 Removing: /var/run/dpdk/spdk_pid128804 00:30:04.802 Removing: /var/run/dpdk/spdk_pid128836 00:30:04.802 Removing: /var/run/dpdk/spdk_pid128876 00:30:04.802 Removing: /var/run/dpdk/spdk_pid128908 00:30:04.802 Removing: /var/run/dpdk/spdk_pid128947 00:30:04.802 Removing: /var/run/dpdk/spdk_pid128984 00:30:05.061 Removing: /var/run/dpdk/spdk_pid129023 00:30:05.061 Removing: /var/run/dpdk/spdk_pid129055 00:30:05.061 Removing: /var/run/dpdk/spdk_pid129088 00:30:05.061 Removing: /var/run/dpdk/spdk_pid129134 00:30:05.061 Removing: /var/run/dpdk/spdk_pid129162 00:30:05.061 Removing: /var/run/dpdk/spdk_pid129197 00:30:05.061 Removing: /var/run/dpdk/spdk_pid129238 00:30:05.061 Removing: /var/run/dpdk/spdk_pid129279 00:30:05.061 Removing: /var/run/dpdk/spdk_pid129308 00:30:05.061 Removing: /var/run/dpdk/spdk_pid129366 00:30:05.061 Removing: /var/run/dpdk/spdk_pid129403 00:30:05.061 Removing: /var/run/dpdk/spdk_pid129452 00:30:05.061 Removing: /var/run/dpdk/spdk_pid129550 00:30:05.061 Removing: 
/var/run/dpdk/spdk_pid129603 00:30:05.061 Removing: /var/run/dpdk/spdk_pid129630 00:30:05.061 Removing: /var/run/dpdk/spdk_pid129689 00:30:05.061 Removing: /var/run/dpdk/spdk_pid129722 00:30:05.061 Removing: /var/run/dpdk/spdk_pid129749 00:30:05.061 Removing: /var/run/dpdk/spdk_pid129825 00:30:05.061 Removing: /var/run/dpdk/spdk_pid129859 00:30:05.061 Removing: /var/run/dpdk/spdk_pid129907 00:30:05.061 Removing: /var/run/dpdk/spdk_pid129947 00:30:05.061 Removing: /var/run/dpdk/spdk_pid129976 00:30:05.061 Removing: /var/run/dpdk/spdk_pid130007 00:30:05.061 Removing: /var/run/dpdk/spdk_pid130043 00:30:05.061 Removing: /var/run/dpdk/spdk_pid130072 00:30:05.061 Removing: /var/run/dpdk/spdk_pid130100 00:30:05.061 Removing: /var/run/dpdk/spdk_pid130139 00:30:05.061 Removing: /var/run/dpdk/spdk_pid130190 00:30:05.061 Removing: /var/run/dpdk/spdk_pid130250 00:30:05.061 Removing: /var/run/dpdk/spdk_pid130282 00:30:05.061 Removing: /var/run/dpdk/spdk_pid130343 00:30:05.061 Removing: /var/run/dpdk/spdk_pid130370 00:30:05.061 Removing: /var/run/dpdk/spdk_pid130397 00:30:05.061 Removing: /var/run/dpdk/spdk_pid130467 00:30:05.061 Removing: /var/run/dpdk/spdk_pid130507 00:30:05.061 Removing: /var/run/dpdk/spdk_pid130555 00:30:05.061 Removing: /var/run/dpdk/spdk_pid130588 00:30:05.061 Removing: /var/run/dpdk/spdk_pid130621 00:30:05.061 Removing: /var/run/dpdk/spdk_pid130648 00:30:05.061 Removing: /var/run/dpdk/spdk_pid130680 00:30:05.061 Removing: /var/run/dpdk/spdk_pid130709 00:30:05.061 Removing: /var/run/dpdk/spdk_pid130745 00:30:05.061 Removing: /var/run/dpdk/spdk_pid130774 00:30:05.061 Removing: /var/run/dpdk/spdk_pid130871 00:30:05.061 Removing: /var/run/dpdk/spdk_pid130983 00:30:05.061 Removing: /var/run/dpdk/spdk_pid131155 00:30:05.061 Removing: /var/run/dpdk/spdk_pid131194 00:30:05.061 Removing: /var/run/dpdk/spdk_pid131257 00:30:05.061 Removing: /var/run/dpdk/spdk_pid131317 00:30:05.061 Removing: /var/run/dpdk/spdk_pid131367 00:30:05.061 Removing: /var/run/dpdk/spdk_pid131408 00:30:05.061 Removing: /var/run/dpdk/spdk_pid131444 00:30:05.061 Removing: /var/run/dpdk/spdk_pid131500 00:30:05.061 Removing: /var/run/dpdk/spdk_pid131534 00:30:05.061 Removing: /var/run/dpdk/spdk_pid131630 00:30:05.061 Removing: /var/run/dpdk/spdk_pid131702 00:30:05.061 Removing: /var/run/dpdk/spdk_pid131765 00:30:05.062 Removing: /var/run/dpdk/spdk_pid132040 00:30:05.062 Removing: /var/run/dpdk/spdk_pid132187 00:30:05.062 Removing: /var/run/dpdk/spdk_pid132240 00:30:05.322 Removing: /var/run/dpdk/spdk_pid132334 00:30:05.322 Removing: /var/run/dpdk/spdk_pid132435 00:30:05.322 Removing: /var/run/dpdk/spdk_pid132485 00:30:05.322 Removing: /var/run/dpdk/spdk_pid132771 00:30:05.322 Removing: /var/run/dpdk/spdk_pid132892 00:30:05.322 Removing: /var/run/dpdk/spdk_pid132994 00:30:05.322 Removing: /var/run/dpdk/spdk_pid133056 00:30:05.322 Removing: /var/run/dpdk/spdk_pid133097 00:30:05.322 Removing: /var/run/dpdk/spdk_pid133174 00:30:05.322 Removing: /var/run/dpdk/spdk_pid133625 00:30:05.322 Removing: /var/run/dpdk/spdk_pid133676 00:30:05.322 Removing: /var/run/dpdk/spdk_pid134006 00:30:05.322 Removing: /var/run/dpdk/spdk_pid134121 00:30:05.322 Removing: /var/run/dpdk/spdk_pid134223 00:30:05.322 Removing: /var/run/dpdk/spdk_pid134285 00:30:05.322 Removing: /var/run/dpdk/spdk_pid134324 00:30:05.322 Removing: /var/run/dpdk/spdk_pid134356 00:30:05.322 Removing: /var/run/dpdk/spdk_pid135783 00:30:05.322 Removing: /var/run/dpdk/spdk_pid135931 00:30:05.322 Removing: /var/run/dpdk/spdk_pid135936 00:30:05.322 Removing: 
/var/run/dpdk/spdk_pid135962 00:30:05.322 Removing: /var/run/dpdk/spdk_pid136458 00:30:05.322 Removing: /var/run/dpdk/spdk_pid136572 00:30:05.322 Clean 00:30:05.322 killing process with pid 92441 00:30:05.581 killing process with pid 92442 00:30:05.581 06:22:35 -- common/autotest_common.sh@1436 -- # return 0 00:30:05.581 06:22:35 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:30:05.581 06:22:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:05.581 06:22:35 -- common/autotest_common.sh@10 -- # set +x 00:30:05.581 06:22:36 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:30:05.581 06:22:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:05.581 06:22:36 -- common/autotest_common.sh@10 -- # set +x 00:30:05.581 06:22:36 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:05.581 06:22:36 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:30:05.581 06:22:36 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:30:05.581 06:22:36 -- spdk/autotest.sh@394 -- # hash lcov 00:30:05.581 06:22:36 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:30:05.581 06:22:36 -- spdk/autotest.sh@396 -- # hostname 00:30:05.581 06:22:36 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:30:05.840 geninfo: WARNING: invalid characters removed from testname! 00:30:44.567 06:23:13 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:47.858 06:23:18 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:50.440 06:23:20 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:52.975 06:23:23 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:55.512 06:23:26 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info 
'*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:58.803 06:23:28 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:01.340 06:23:31 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:01.340 06:23:31 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:01.340 06:23:31 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:31:01.340 06:23:31 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:01.340 06:23:31 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:01.340 06:23:31 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:01.340 06:23:31 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:01.340 06:23:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:01.340 06:23:31 -- paths/export.sh@5 -- $ export PATH 00:31:01.340 06:23:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:01.340 06:23:31 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:31:01.340 06:23:31 -- common/autobuild_common.sh@435 -- $ date +%s 00:31:01.340 06:23:31 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1718087011.XXXXXX 00:31:01.340 06:23:31 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1718087011.ciPczw 00:31:01.340 06:23:31 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:31:01.340 06:23:31 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:31:01.340 06:23:31 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:31:01.340 06:23:31 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:31:01.340 06:23:31 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:31:01.340 06:23:31 -- 
common/autobuild_common.sh@451 -- $ get_config_params 00:31:01.340 06:23:31 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:31:01.340 06:23:31 -- common/autotest_common.sh@10 -- $ set +x 00:31:01.340 06:23:31 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage' 00:31:01.340 06:23:31 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:31:01.340 06:23:31 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:31:01.340 06:23:31 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:31:01.340 06:23:31 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:31:01.340 06:23:31 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:31:01.340 06:23:31 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:31:01.340 06:23:31 -- common/autotest_common.sh@712 -- $ xtrace_disable 00:31:01.340 06:23:31 -- common/autotest_common.sh@10 -- $ set +x 00:31:01.340 06:23:31 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:31:01.340 06:23:31 -- spdk/autopackage.sh@36 -- $ [[ -n '' ]] 00:31:01.340 06:23:31 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:31:01.340 06:23:31 -- spdk/autopackage.sh@40 -- $ get_config_params 00:31:01.340 06:23:31 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:31:01.340 06:23:31 -- common/autotest_common.sh@10 -- $ set +x 00:31:01.340 06:23:31 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage' 00:31:01.341 06:23:31 -- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --enable-lto 00:31:01.341 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:31:01.341 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:31:01.609 Using 'verbs' RDMA provider 00:31:17.082 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:31:29.300 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:31:29.560 Creating mk/config.mk...done. 00:31:29.560 Creating mk/cc.flags.mk...done. 00:31:29.560 Type 'make' to build. 00:31:29.560 06:24:00 -- spdk/autopackage.sh@43 -- $ make -j10 00:31:29.820 make[1]: Nothing to be done for 'all'. 
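The coverage steps above capture a test-time lcov tracefile tagged with the hostname, merge it with the pre-test baseline, and then strip vendored and system paths from the total. The same pipeline in plain form; LCOV_OPTS collects a subset of the --rc flags repeated on every call above (the genhtml_* rc options are omitted here), and the hostname -t tag is what triggered the "invalid characters removed from testname" warning:

#!/usr/bin/env bash
# Sketch of the lcov pipeline traced above: capture, merge with the
# baseline, then prune external/vendored paths from the total.
out=/home/vagrant/spdk_repo/spdk/../output
src=/home/vagrant/spdk_repo/spdk
LCOV_OPTS=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q)

lcov "${LCOV_OPTS[@]}" -c -d "$src" -t "$(hostname)" -o "$out/cov_test.info"
lcov "${LCOV_OPTS[@]}" -a "$out/cov_base.info" -a "$out/cov_test.info" \
     -o "$out/cov_total.info"
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
  lcov "${LCOV_OPTS[@]}" -r "$out/cov_total.info" "$pattern" \
       -o "$out/cov_total.info"
done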
00:31:36.391 The Meson build system 00:31:36.391 Version: 1.4.0 00:31:36.391 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:31:36.391 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:31:36.391 Build type: native build 00:31:36.391 Program cat found: YES (/usr/bin/cat) 00:31:36.391 Project name: DPDK 00:31:36.391 Project version: 23.11.0 00:31:36.391 C compiler for the host machine: cc (gcc 11.4.0 "cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0") 00:31:36.391 C linker for the host machine: cc ld.bfd 2.38 00:31:36.391 Host machine cpu family: x86_64 00:31:36.391 Host machine cpu: x86_64 00:31:36.391 Message: ## Building in Developer Mode ## 00:31:36.391 Program pkg-config found: YES (/usr/bin/pkg-config) 00:31:36.391 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:31:36.391 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:31:36.391 Program python3 found: YES (/usr/bin/python3) 00:31:36.391 Program cat found: YES (/usr/bin/cat) 00:31:36.391 Compiler for C supports arguments -march=native: YES 00:31:36.391 Checking for size of "void *" : 8 00:31:36.391 Checking for size of "void *" : 8 (cached) 00:31:36.391 Library m found: YES 00:31:36.391 Library numa found: YES 00:31:36.391 Has header "numaif.h" : YES 00:31:36.391 Library fdt found: NO 00:31:36.391 Library execinfo found: NO 00:31:36.391 Has header "execinfo.h" : YES 00:31:36.391 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2 00:31:36.391 Run-time dependency libarchive found: NO (tried pkgconfig) 00:31:36.391 Run-time dependency libbsd found: NO (tried pkgconfig) 00:31:36.391 Run-time dependency jansson found: NO (tried pkgconfig) 00:31:36.391 Run-time dependency openssl found: YES 3.0.2 00:31:36.391 Run-time dependency libpcap found: NO (tried pkgconfig) 00:31:36.391 Library pcap found: NO 00:31:36.391 Compiler for C supports arguments -Wcast-qual: YES 00:31:36.391 Compiler for C supports arguments -Wdeprecated: YES 00:31:36.391 Compiler for C supports arguments -Wformat: YES 00:31:36.391 Compiler for C supports arguments -Wformat-nonliteral: YES 00:31:36.391 Compiler for C supports arguments -Wformat-security: YES 00:31:36.391 Compiler for C supports arguments -Wmissing-declarations: YES 00:31:36.391 Compiler for C supports arguments -Wmissing-prototypes: YES 00:31:36.391 Compiler for C supports arguments -Wnested-externs: YES 00:31:36.391 Compiler for C supports arguments -Wold-style-definition: YES 00:31:36.391 Compiler for C supports arguments -Wpointer-arith: YES 00:31:36.391 Compiler for C supports arguments -Wsign-compare: YES 00:31:36.391 Compiler for C supports arguments -Wstrict-prototypes: YES 00:31:36.391 Compiler for C supports arguments -Wundef: YES 00:31:36.391 Compiler for C supports arguments -Wwrite-strings: YES 00:31:36.391 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:31:36.391 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:31:36.391 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:31:36.391 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:31:36.391 Program objdump found: YES (/usr/bin/objdump) 00:31:36.391 Compiler for C supports arguments -mavx512f: YES 00:31:36.391 Checking if "AVX512 checking" compiles: YES 00:31:36.391 Fetching value of define "__SSE4_2__" : 1 00:31:36.391 Fetching value of define "__AES__" : 1 00:31:36.391 Fetching value of define "__AVX__" : 1 00:31:36.391 Fetching value of 
define "__AVX2__" : 1 00:31:36.391 Fetching value of define "__AVX512BW__" : 1 00:31:36.391 Fetching value of define "__AVX512CD__" : 1 00:31:36.391 Fetching value of define "__AVX512DQ__" : 1 00:31:36.391 Fetching value of define "__AVX512F__" : 1 00:31:36.391 Fetching value of define "__AVX512VL__" : 1 00:31:36.391 Fetching value of define "__PCLMUL__" : 1 00:31:36.391 Fetching value of define "__RDRND__" : 1 00:31:36.391 Fetching value of define "__RDSEED__" : 1 00:31:36.391 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:31:36.391 Fetching value of define "__znver1__" : (undefined) 00:31:36.391 Fetching value of define "__znver2__" : (undefined) 00:31:36.391 Fetching value of define "__znver3__" : (undefined) 00:31:36.391 Fetching value of define "__znver4__" : (undefined) 00:31:36.391 Compiler for C supports arguments -ffat-lto-objects: YES 00:31:36.391 Library asan found: YES 00:31:36.391 Compiler for C supports arguments -Wno-format-truncation: YES 00:31:36.391 Message: lib/log: Defining dependency "log" 00:31:36.391 Message: lib/kvargs: Defining dependency "kvargs" 00:31:36.391 Message: lib/telemetry: Defining dependency "telemetry" 00:31:36.391 Library rt found: YES 00:31:36.391 Checking for function "getentropy" : NO 00:31:36.391 Message: lib/eal: Defining dependency "eal" 00:31:36.391 Message: lib/ring: Defining dependency "ring" 00:31:36.391 Message: lib/rcu: Defining dependency "rcu" 00:31:36.391 Message: lib/mempool: Defining dependency "mempool" 00:31:36.391 Message: lib/mbuf: Defining dependency "mbuf" 00:31:36.391 Fetching value of define "__PCLMUL__" : 1 (cached) 00:31:36.391 Fetching value of define "__AVX512F__" : 1 (cached) 00:31:36.391 Fetching value of define "__AVX512BW__" : 1 (cached) 00:31:36.391 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:31:36.391 Fetching value of define "__AVX512VL__" : 1 (cached) 00:31:36.391 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:31:36.391 Compiler for C supports arguments -mpclmul: YES 00:31:36.391 Compiler for C supports arguments -maes: YES 00:31:36.391 Compiler for C supports arguments -mavx512f: YES (cached) 00:31:36.391 Compiler for C supports arguments -mavx512bw: YES 00:31:36.391 Compiler for C supports arguments -mavx512dq: YES 00:31:36.391 Compiler for C supports arguments -mavx512vl: YES 00:31:36.391 Compiler for C supports arguments -mvpclmulqdq: YES 00:31:36.391 Compiler for C supports arguments -mavx2: YES 00:31:36.391 Compiler for C supports arguments -mavx: YES 00:31:36.391 Message: lib/net: Defining dependency "net" 00:31:36.391 Message: lib/meter: Defining dependency "meter" 00:31:36.391 Message: lib/ethdev: Defining dependency "ethdev" 00:31:36.391 Message: lib/pci: Defining dependency "pci" 00:31:36.391 Message: lib/cmdline: Defining dependency "cmdline" 00:31:36.391 Message: lib/hash: Defining dependency "hash" 00:31:36.391 Message: lib/timer: Defining dependency "timer" 00:31:36.391 Message: lib/compressdev: Defining dependency "compressdev" 00:31:36.391 Message: lib/cryptodev: Defining dependency "cryptodev" 00:31:36.391 Message: lib/dmadev: Defining dependency "dmadev" 00:31:36.391 Compiler for C supports arguments -Wno-cast-qual: YES 00:31:36.391 Message: lib/power: Defining dependency "power" 00:31:36.391 Message: lib/reorder: Defining dependency "reorder" 00:31:36.391 Message: lib/security: Defining dependency "security" 00:31:36.391 Has header "linux/userfaultfd.h" : YES 00:31:36.391 Has header "linux/vduse.h" : YES 00:31:36.391 Message: lib/vhost: Defining 
dependency "vhost" 00:31:36.391 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:31:36.391 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:31:36.391 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:31:36.391 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:31:36.391 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:31:36.391 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:31:36.391 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:31:36.391 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:31:36.391 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:31:36.391 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:31:36.391 Program doxygen found: YES (/usr/bin/doxygen) 00:31:36.391 Configuring doxy-api-html.conf using configuration 00:31:36.391 Configuring doxy-api-man.conf using configuration 00:31:36.391 Program mandb found: YES (/usr/bin/mandb) 00:31:36.391 Program sphinx-build found: NO 00:31:36.391 Configuring rte_build_config.h using configuration 00:31:36.391 Message: 00:31:36.391 ================= 00:31:36.391 Applications Enabled 00:31:36.391 ================= 00:31:36.391 00:31:36.391 apps: 00:31:36.391 00:31:36.391 00:31:36.391 Message: 00:31:36.391 ================= 00:31:36.391 Libraries Enabled 00:31:36.391 ================= 00:31:36.391 00:31:36.391 libs: 00:31:36.391 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:31:36.391 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:31:36.391 cryptodev, dmadev, power, reorder, security, vhost, 00:31:36.391 00:31:36.391 Message: 00:31:36.391 =============== 00:31:36.391 Drivers Enabled 00:31:36.392 =============== 00:31:36.392 00:31:36.392 common: 00:31:36.392 00:31:36.392 bus: 00:31:36.392 pci, vdev, 00:31:36.392 mempool: 00:31:36.392 ring, 00:31:36.392 dma: 00:31:36.392 00:31:36.392 net: 00:31:36.392 00:31:36.392 crypto: 00:31:36.392 00:31:36.392 compress: 00:31:36.392 00:31:36.392 vdpa: 00:31:36.392 00:31:36.392 00:31:36.392 Message: 00:31:36.392 ================= 00:31:36.392 Content Skipped 00:31:36.392 ================= 00:31:36.392 00:31:36.392 apps: 00:31:36.392 dumpcap: explicitly disabled via build config 00:31:36.392 graph: explicitly disabled via build config 00:31:36.392 pdump: explicitly disabled via build config 00:31:36.392 proc-info: explicitly disabled via build config 00:31:36.392 test-acl: explicitly disabled via build config 00:31:36.392 test-bbdev: explicitly disabled via build config 00:31:36.392 test-cmdline: explicitly disabled via build config 00:31:36.392 test-compress-perf: explicitly disabled via build config 00:31:36.392 test-crypto-perf: explicitly disabled via build config 00:31:36.392 test-dma-perf: explicitly disabled via build config 00:31:36.392 test-eventdev: explicitly disabled via build config 00:31:36.392 test-fib: explicitly disabled via build config 00:31:36.392 test-flow-perf: explicitly disabled via build config 00:31:36.392 test-gpudev: explicitly disabled via build config 00:31:36.392 test-mldev: explicitly disabled via build config 00:31:36.392 test-pipeline: explicitly disabled via build config 00:31:36.392 test-pmd: explicitly disabled via build config 00:31:36.392 test-regex: explicitly disabled via build config 00:31:36.392 test-sad: explicitly disabled via build config 00:31:36.392 test-security-perf: explicitly disabled via build config 
00:31:36.392 00:31:36.392 libs: 00:31:36.392 metrics: explicitly disabled via build config 00:31:36.392 acl: explicitly disabled via build config 00:31:36.392 bbdev: explicitly disabled via build config 00:31:36.392 bitratestats: explicitly disabled via build config 00:31:36.392 bpf: explicitly disabled via build config 00:31:36.392 cfgfile: explicitly disabled via build config 00:31:36.392 distributor: explicitly disabled via build config 00:31:36.392 efd: explicitly disabled via build config 00:31:36.392 eventdev: explicitly disabled via build config 00:31:36.392 dispatcher: explicitly disabled via build config 00:31:36.392 gpudev: explicitly disabled via build config 00:31:36.392 gro: explicitly disabled via build config 00:31:36.392 gso: explicitly disabled via build config 00:31:36.392 ip_frag: explicitly disabled via build config 00:31:36.392 jobstats: explicitly disabled via build config 00:31:36.392 latencystats: explicitly disabled via build config 00:31:36.392 lpm: explicitly disabled via build config 00:31:36.392 member: explicitly disabled via build config 00:31:36.392 pcapng: explicitly disabled via build config 00:31:36.392 rawdev: explicitly disabled via build config 00:31:36.392 regexdev: explicitly disabled via build config 00:31:36.392 mldev: explicitly disabled via build config 00:31:36.392 rib: explicitly disabled via build config 00:31:36.392 sched: explicitly disabled via build config 00:31:36.392 stack: explicitly disabled via build config 00:31:36.392 ipsec: explicitly disabled via build config 00:31:36.392 pdcp: explicitly disabled via build config 00:31:36.392 fib: explicitly disabled via build config 00:31:36.392 port: explicitly disabled via build config 00:31:36.392 pdump: explicitly disabled via build config 00:31:36.392 table: explicitly disabled via build config 00:31:36.392 pipeline: explicitly disabled via build config 00:31:36.392 graph: explicitly disabled via build config 00:31:36.392 node: explicitly disabled via build config 00:31:36.392 00:31:36.392 drivers: 00:31:36.392 common/cpt: not in enabled drivers build config 00:31:36.392 common/dpaax: not in enabled drivers build config 00:31:36.392 common/iavf: not in enabled drivers build config 00:31:36.392 common/idpf: not in enabled drivers build config 00:31:36.392 common/mvep: not in enabled drivers build config 00:31:36.392 common/octeontx: not in enabled drivers build config 00:31:36.392 bus/auxiliary: not in enabled drivers build config 00:31:36.392 bus/cdx: not in enabled drivers build config 00:31:36.392 bus/dpaa: not in enabled drivers build config 00:31:36.392 bus/fslmc: not in enabled drivers build config 00:31:36.392 bus/ifpga: not in enabled drivers build config 00:31:36.392 bus/platform: not in enabled drivers build config 00:31:36.392 bus/vmbus: not in enabled drivers build config 00:31:36.392 common/cnxk: not in enabled drivers build config 00:31:36.392 common/mlx5: not in enabled drivers build config 00:31:36.392 common/nfp: not in enabled drivers build config 00:31:36.392 common/qat: not in enabled drivers build config 00:31:36.392 common/sfc_efx: not in enabled drivers build config 00:31:36.392 mempool/bucket: not in enabled drivers build config 00:31:36.392 mempool/cnxk: not in enabled drivers build config 00:31:36.392 mempool/dpaa: not in enabled drivers build config 00:31:36.392 mempool/dpaa2: not in enabled drivers build config 00:31:36.392 mempool/octeontx: not in enabled drivers build config 00:31:36.392 mempool/stack: not in enabled drivers build config 00:31:36.392 dma/cnxk: 
not in enabled drivers build config 00:31:36.392 dma/dpaa: not in enabled drivers build config 00:31:36.392 dma/dpaa2: not in enabled drivers build config 00:31:36.392 dma/hisilicon: not in enabled drivers build config 00:31:36.392 dma/idxd: not in enabled drivers build config 00:31:36.392 dma/ioat: not in enabled drivers build config 00:31:36.392 dma/skeleton: not in enabled drivers build config 00:31:36.392 net/af_packet: not in enabled drivers build config 00:31:36.392 net/af_xdp: not in enabled drivers build config 00:31:36.392 net/ark: not in enabled drivers build config 00:31:36.392 net/atlantic: not in enabled drivers build config 00:31:36.392 net/avp: not in enabled drivers build config 00:31:36.392 net/axgbe: not in enabled drivers build config 00:31:36.392 net/bnx2x: not in enabled drivers build config 00:31:36.392 net/bnxt: not in enabled drivers build config 00:31:36.392 net/bonding: not in enabled drivers build config 00:31:36.392 net/cnxk: not in enabled drivers build config 00:31:36.392 net/cpfl: not in enabled drivers build config 00:31:36.392 net/cxgbe: not in enabled drivers build config 00:31:36.392 net/dpaa: not in enabled drivers build config 00:31:36.392 net/dpaa2: not in enabled drivers build config 00:31:36.392 net/e1000: not in enabled drivers build config 00:31:36.392 net/ena: not in enabled drivers build config 00:31:36.392 net/enetc: not in enabled drivers build config 00:31:36.392 net/enetfec: not in enabled drivers build config 00:31:36.392 net/enic: not in enabled drivers build config 00:31:36.392 net/failsafe: not in enabled drivers build config 00:31:36.392 net/fm10k: not in enabled drivers build config 00:31:36.392 net/gve: not in enabled drivers build config 00:31:36.392 net/hinic: not in enabled drivers build config 00:31:36.392 net/hns3: not in enabled drivers build config 00:31:36.392 net/i40e: not in enabled drivers build config 00:31:36.392 net/iavf: not in enabled drivers build config 00:31:36.392 net/ice: not in enabled drivers build config 00:31:36.392 net/idpf: not in enabled drivers build config 00:31:36.392 net/igc: not in enabled drivers build config 00:31:36.392 net/ionic: not in enabled drivers build config 00:31:36.392 net/ipn3ke: not in enabled drivers build config 00:31:36.392 net/ixgbe: not in enabled drivers build config 00:31:36.392 net/mana: not in enabled drivers build config 00:31:36.392 net/memif: not in enabled drivers build config 00:31:36.392 net/mlx4: not in enabled drivers build config 00:31:36.392 net/mlx5: not in enabled drivers build config 00:31:36.392 net/mvneta: not in enabled drivers build config 00:31:36.392 net/mvpp2: not in enabled drivers build config 00:31:36.392 net/netvsc: not in enabled drivers build config 00:31:36.392 net/nfb: not in enabled drivers build config 00:31:36.392 net/nfp: not in enabled drivers build config 00:31:36.392 net/ngbe: not in enabled drivers build config 00:31:36.392 net/null: not in enabled drivers build config 00:31:36.392 net/octeontx: not in enabled drivers build config 00:31:36.392 net/octeon_ep: not in enabled drivers build config 00:31:36.392 net/pcap: not in enabled drivers build config 00:31:36.392 net/pfe: not in enabled drivers build config 00:31:36.392 net/qede: not in enabled drivers build config 00:31:36.392 net/ring: not in enabled drivers build config 00:31:36.392 net/sfc: not in enabled drivers build config 00:31:36.392 net/softnic: not in enabled drivers build config 00:31:36.392 net/tap: not in enabled drivers build config 00:31:36.392 net/thunderx: not in enabled 
drivers build config 00:31:36.392 net/txgbe: not in enabled drivers build config 00:31:36.392 net/vdev_netvsc: not in enabled drivers build config 00:31:36.392 net/vhost: not in enabled drivers build config 00:31:36.392 net/virtio: not in enabled drivers build config 00:31:36.392 net/vmxnet3: not in enabled drivers build config 00:31:36.392 raw/*: missing internal dependency, "rawdev" 00:31:36.392 crypto/armv8: not in enabled drivers build config 00:31:36.392 crypto/bcmfs: not in enabled drivers build config 00:31:36.392 crypto/caam_jr: not in enabled drivers build config 00:31:36.392 crypto/ccp: not in enabled drivers build config 00:31:36.392 crypto/cnxk: not in enabled drivers build config 00:31:36.392 crypto/dpaa_sec: not in enabled drivers build config 00:31:36.392 crypto/dpaa2_sec: not in enabled drivers build config 00:31:36.392 crypto/ipsec_mb: not in enabled drivers build config 00:31:36.392 crypto/mlx5: not in enabled drivers build config 00:31:36.392 crypto/mvsam: not in enabled drivers build config 00:31:36.392 crypto/nitrox: not in enabled drivers build config 00:31:36.392 crypto/null: not in enabled drivers build config 00:31:36.392 crypto/octeontx: not in enabled drivers build config 00:31:36.392 crypto/openssl: not in enabled drivers build config 00:31:36.393 crypto/scheduler: not in enabled drivers build config 00:31:36.393 crypto/uadk: not in enabled drivers build config 00:31:36.393 crypto/virtio: not in enabled drivers build config 00:31:36.393 compress/isal: not in enabled drivers build config 00:31:36.393 compress/mlx5: not in enabled drivers build config 00:31:36.393 compress/octeontx: not in enabled drivers build config 00:31:36.393 compress/zlib: not in enabled drivers build config 00:31:36.393 regex/*: missing internal dependency, "regexdev" 00:31:36.393 ml/*: missing internal dependency, "mldev" 00:31:36.393 vdpa/ifc: not in enabled drivers build config 00:31:36.393 vdpa/mlx5: not in enabled drivers build config 00:31:36.393 vdpa/nfp: not in enabled drivers build config 00:31:36.393 vdpa/sfc: not in enabled drivers build config 00:31:36.393 event/*: missing internal dependency, "eventdev" 00:31:36.393 baseband/*: missing internal dependency, "bbdev" 00:31:36.393 gpu/*: missing internal dependency, "gpudev" 00:31:36.393 00:31:36.393 00:31:36.393 Build targets in project: 85 00:31:36.393 00:31:36.393 DPDK 23.11.0 00:31:36.393 00:31:36.393 User defined options 00:31:36.393 default_library : static 00:31:36.393 libdir : lib 00:31:36.393 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:31:36.393 b_lto : true 00:31:36.393 b_sanitize : address 00:31:36.393 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon 00:31:36.393 c_link_args : 00:31:36.393 cpu_instruction_set: native 00:31:36.393 disable_apps : test-eventdev,test-compress-perf,pdump,test-crypto-perf,test-pmd,test-flow-perf,test-acl,test-sad,graph,proc-info,test-bbdev,test-mldev,test-gpudev,test-fib,test-cmdline,test-security-perf,dumpcap,test-pipeline,test,test-regex,test-dma-perf 00:31:36.393 disable_libs : node,lpm,acl,pdump,cfgfile,efd,latencystats,distributor,bbdev,eventdev,port,bitratestats,pdcp,bpf,graph,member,mldev,stack,pcapng,gro,fib,table,regexdev,dispatcher,sched,ipsec,metrics,gso,jobstats,pipeline,rib,ip_frag,rawdev,gpudev 00:31:36.393 enable_docs : false 00:31:36.393 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:31:36.393 enable_kmods : false 00:31:36.393 tests : false 00:31:36.393 00:31:36.393 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:31:36.393 ninja: 
Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:31:36.393 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:31:36.393 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:31:36.393 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:31:36.393 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:31:36.393 [5/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:31:36.393 [6/265] Linking static target lib/librte_kvargs.a 00:31:36.393 [7/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:31:36.393 [8/265] Linking static target lib/librte_log.a 00:31:36.393 [9/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:31:36.652 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:31:36.652 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:31:36.652 [12/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:31:36.652 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:31:36.652 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:31:36.911 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:31:36.911 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:31:36.911 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:31:36.911 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:31:36.911 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:31:36.911 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:31:36.911 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:31:37.169 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:31:37.169 [23/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:31:37.169 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:31:37.169 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:31:37.170 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:31:37.170 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:31:37.428 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:31:37.428 [29/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:31:37.428 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:31:37.428 [31/265] Linking static target lib/librte_telemetry.a 00:31:37.428 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:31:37.428 [33/265] Linking target lib/librte_log.so.24.0 00:31:37.428 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:31:37.428 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:31:37.428 [36/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:31:37.687 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:31:37.687 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:31:37.687 [39/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:31:37.687 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:31:37.687 [41/265] Linking target lib/librte_kvargs.so.24.0 00:31:37.687 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:31:37.687 [43/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:31:37.946 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:31:37.946 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:31:37.946 [46/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:31:37.946 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:31:37.946 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:31:37.946 [49/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:31:37.946 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:31:38.204 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:31:38.204 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:31:38.204 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:31:38.204 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:31:38.204 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:31:38.204 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:31:38.463 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:31:38.463 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:31:38.463 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:31:38.463 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:31:38.463 [61/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:31:38.463 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:31:38.463 [63/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:31:38.463 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:31:38.463 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:31:38.463 [66/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:31:38.722 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:31:38.722 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:31:38.722 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:31:38.722 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:31:38.722 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:31:38.722 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:31:38.722 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:31:38.722 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:31:38.722 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:31:38.985 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:31:38.985 [77/265] Linking target lib/librte_telemetry.so.24.0 00:31:38.985 [78/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:31:38.985 [79/265] Generating symbol file 
lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:31:38.985 [80/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:31:38.985 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:31:39.274 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:31:39.274 [83/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:31:39.274 [84/265] Linking static target lib/librte_ring.a 00:31:39.274 [85/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:31:39.274 [86/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:31:39.274 [87/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:31:39.540 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:31:39.540 [89/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:31:39.540 [90/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:31:39.540 [91/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:31:39.540 [92/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:31:39.799 [93/265] Linking static target lib/librte_eal.a 00:31:39.799 [94/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:31:39.799 [95/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:31:39.799 [96/265] Linking static target lib/librte_mempool.a 00:31:39.799 [97/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:31:39.799 [98/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:31:39.799 [99/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:31:39.799 [100/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:31:39.799 [101/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:31:39.799 [102/265] Linking static target lib/librte_rcu.a 00:31:39.799 [103/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:31:40.057 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:31:40.057 [105/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:31:40.057 [106/265] Linking static target lib/librte_net.a 00:31:40.057 [107/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:31:40.057 [108/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:31:40.057 [109/265] Linking static target lib/librte_meter.a 00:31:40.316 [110/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:31:40.316 [111/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:31:40.316 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:31:40.316 [113/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:31:40.316 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:31:40.316 [115/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:31:40.574 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:31:40.574 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:31:40.832 [118/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:31:40.832 [119/265] Linking static target lib/librte_mbuf.a 00:31:40.832 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 
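The configure output above probes compiler defines such as __AVX512F__ and flag support such as -mavx512f, and step [97/265] builds net_crc_avx512.c into its own helper library. Those probes feed the usual SIMD pattern: a compile-time guard around the vector path plus a runtime CPUID check to select it. A minimal, self-contained sketch of that pattern (illustrative only, not DPDK source; relies on the GCC/Clang x86 builtin __builtin_cpu_supports):

#include <stdio.h>

int main(void)
{
#ifdef __AVX512F__
	/* Set when the compiler was invoked with -mavx512f or an -march
	 * that implies it, exactly what the meson probe above checks. */
	puts("AVX-512F code path compiled in");
#else
	puts("AVX-512F code path not compiled in");
#endif
	/* GCC/Clang builtin; consults CPUID at runtime. */
	if (__builtin_cpu_supports("avx512f"))
		puts("CPU reports AVX-512F; the SIMD path would be selected");
	else
		puts("CPU lacks AVX-512F; a scalar fallback would be used");
	return 0;
}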
00:31:40.832 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:31:41.090 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:31:41.090 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:31:41.090 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:31:41.349 [125/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:31:41.349 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:31:41.349 [127/265] Linking static target lib/librte_pci.a 00:31:41.349 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:31:41.349 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:31:41.349 [130/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:31:41.349 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:31:41.349 [132/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:31:41.608 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:31:41.608 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:31:41.608 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:31:41.608 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:31:41.609 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:31:41.609 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:31:41.609 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:31:41.609 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:31:41.609 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:31:41.609 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:31:41.609 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:31:41.867 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:31:41.867 [145/265] Linking static target lib/librte_cmdline.a 00:31:41.867 [146/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:31:42.125 [147/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:31:42.125 [148/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:31:42.125 [149/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:31:42.125 [150/265] Linking static target lib/librte_timer.a 00:31:42.383 [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:31:42.383 [152/265] Linking static target lib/librte_compressdev.a 00:31:42.383 [153/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:31:42.383 [154/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:31:42.383 [155/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:31:42.642 [156/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:31:42.642 [157/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:31:42.642 [158/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:31:42.642 [159/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:31:42.642 [160/265] Compiling 
C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:31:42.900 [161/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:31:42.900 [162/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:31:42.900 [163/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:31:43.158 [164/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:31:43.158 [165/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:31:43.159 [166/265] Linking static target lib/librte_dmadev.a 00:31:43.159 [167/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:31:43.418 [168/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:31:43.418 [169/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:31:43.418 [170/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:31:43.418 [171/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:31:43.418 [172/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:31:43.418 [173/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:31:43.676 [174/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:31:43.934 [175/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:31:43.934 [176/265] Linking static target lib/librte_power.a 00:31:43.934 [177/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:31:43.934 [178/265] Linking static target lib/librte_reorder.a 00:31:43.934 [179/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:31:43.934 [180/265] Linking static target lib/librte_security.a 00:31:43.934 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:31:43.934 [182/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:31:44.192 [183/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:31:44.192 [184/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:31:44.450 [185/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:31:44.450 [186/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:31:44.450 [187/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:31:44.709 [188/265] Linking static target lib/librte_ethdev.a 00:31:44.709 [189/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:31:44.709 [190/265] Linking static target lib/librte_cryptodev.a 00:31:44.967 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:31:44.967 [192/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:31:44.967 [193/265] Linking static target lib/librte_hash.a 00:31:44.967 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:31:45.226 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:31:45.484 [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:31:45.484 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:31:45.484 [198/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:31:45.742 [199/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:31:46.022 [200/265] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:31:46.022 [201/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:31:46.022 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:31:46.022 [203/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:31:46.022 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:31:46.022 [205/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:31:46.022 [206/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:31:46.282 [207/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:31:46.282 [208/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:31:46.282 [209/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:31:46.282 [210/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:31:46.282 [211/265] Linking static target drivers/librte_bus_vdev.a 00:31:46.282 [212/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:31:46.541 [213/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:31:46.541 [214/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:31:46.541 [215/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:31:46.541 [216/265] Linking static target drivers/librte_bus_pci.a 00:31:46.541 [217/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:31:46.800 [218/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:31:46.800 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:31:46.800 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:31:46.800 [221/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:31:46.800 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:31:46.800 [223/265] Linking static target drivers/librte_mempool_ring.a 00:31:47.059 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:31:47.628 [225/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:31:52.905 [226/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:31:56.197 [227/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:31:57.132 [228/265] Linking target lib/librte_eal.so.24.0 00:31:57.132 lto-wrapper: warning: using serial compilation of 5 LTRANS jobs 00:31:57.132 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:31:57.391 [230/265] Linking target lib/librte_ring.so.24.0 00:31:57.391 [231/265] Linking target lib/librte_meter.so.24.0 00:31:57.391 [232/265] Linking target lib/librte_pci.so.24.0 00:31:57.391 [233/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:31:57.391 [234/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:31:57.391 [235/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:31:57.649 [236/265] Linking target drivers/librte_bus_vdev.so.24.0 00:31:57.649 [237/265] Linking target lib/librte_timer.so.24.0 
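By this point ninja has produced the static rte_* libraries plus the pci/vdev bus and ring mempool drivers selected in the configuration above. A minimal sketch of an application consuming them, assuming a standard DPDK installation discoverable via pkg-config (the build command and pool sizing are assumptions, not taken from this log):

/* Build (assumption): cc demo.c $(pkg-config --cflags --libs libdpdk) */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

int main(int argc, char **argv)
{
	/* The EAL consumes its own arguments (cores, memory, ...) first. */
	if (rte_eal_init(argc, argv) < 0) {
		fprintf(stderr, "rte_eal_init failed\n");
		return 1;
	}

	/* Backed by the mempool/ring driver linked in step [223/265]. */
	struct rte_mempool *pool = rte_pktmbuf_pool_create("demo_pool",
			8191, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
			rte_socket_id());
	if (pool == NULL) {
		fprintf(stderr, "mbuf pool creation failed\n");
		return 1;
	}

	struct rte_mbuf *m = rte_pktmbuf_alloc(pool);
	printf("allocated one mbuf: %s\n", m != NULL ? "ok" : "failed");
	rte_pktmbuf_free(m);

	rte_eal_cleanup();
	return 0;
}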
00:31:57.649 [238/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:31:57.649 [239/265] Linking target lib/librte_dmadev.so.24.0 00:31:57.908 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:31:58.167 [241/265] Linking target lib/librte_rcu.so.24.0 00:31:58.167 [242/265] Linking target lib/librte_mempool.so.24.0 00:31:58.426 [243/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:31:58.426 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:31:58.685 [245/265] Linking target drivers/librte_bus_pci.so.24.0 00:31:58.944 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:31:59.882 [247/265] Linking target lib/librte_mbuf.so.24.0 00:31:59.882 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:32:00.451 [249/265] Linking target lib/librte_reorder.so.24.0 00:32:00.451 [250/265] Linking target lib/librte_compressdev.so.24.0 00:32:01.019 [251/265] Linking target lib/librte_net.so.24.0 00:32:01.019 [252/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:32:01.958 [253/265] Linking target lib/librte_cryptodev.so.24.0 00:32:01.958 [254/265] Linking target lib/librte_cmdline.so.24.0 00:32:02.216 [255/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:32:02.475 [256/265] Linking target lib/librte_security.so.24.0 00:32:05.009 [257/265] Linking target lib/librte_hash.so.24.0 00:32:05.009 [258/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:32:11.654 [259/265] Linking target lib/librte_ethdev.so.24.0 00:32:11.654 lto-wrapper: warning: using serial compilation of 6 LTRANS jobs 00:32:11.654 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:32:12.593 [261/265] Linking target lib/librte_power.so.24.0 00:32:15.884 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:32:15.884 [263/265] Linking static target lib/librte_vhost.a 00:32:18.423 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:33:05.105 [265/265] Linking target lib/librte_vhost.so.24.0 00:33:05.105 lto-wrapper: warning: using serial compilation of 8 LTRANS jobs 00:33:05.105 INFO: autodetecting backend as ninja 00:33:05.105 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:33:05.105 CC lib/ut_mock/mock.o 00:33:05.105 CC lib/log/log.o 00:33:05.105 CC lib/log/log_flags.o 00:33:05.105 CC lib/ut/ut.o 00:33:05.105 CC lib/log/log_deprecated.o 00:33:05.105 LIB libspdk_ut_mock.a 00:33:05.105 LIB libspdk_ut.a 00:33:05.105 LIB libspdk_log.a 00:33:05.105 CC lib/util/base64.o 00:33:05.105 CC lib/util/bit_array.o 00:33:05.105 CC lib/util/cpuset.o 00:33:05.105 CC lib/ioat/ioat.o 00:33:05.105 CC lib/util/crc16.o 00:33:05.105 CC lib/util/crc32.o 00:33:05.105 CC lib/util/crc32c.o 00:33:05.105 CXX lib/trace_parser/trace.o 00:33:05.105 CC lib/dma/dma.o 00:33:05.105 CC lib/vfio_user/host/vfio_user_pci.o 00:33:05.105 CC lib/vfio_user/host/vfio_user.o 00:33:05.105 CC lib/util/crc32_ieee.o 00:33:05.105 CC lib/util/crc64.o 00:33:05.105 CC lib/util/dif.o 00:33:05.105 CC lib/util/fd.o 00:33:05.105 LIB libspdk_dma.a 00:33:05.105 CC lib/util/file.o 00:33:05.105 CC lib/util/hexlify.o 00:33:05.105 LIB libspdk_ioat.a 00:33:05.105 CC lib/util/iov.o 00:33:05.105 CC lib/util/math.o 
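With DPDK linked, the SPDK build proper begins with the lib/util helpers (base64, bit_array, crc16/32/32c/64, dif, ...). For reference, the reflected CRC-32 that a crc32-ieee-style helper computes, shown here as a self-contained textbook bitwise implementation rather than SPDK's actual (table-driven or hardware-assisted) code:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* 0xEDB88320 is the bit-reversed IEEE 802.3 polynomial. */
static uint32_t
crc32_ieee(const void *buf, size_t len, uint32_t crc)
{
	const uint8_t *p = buf;

	crc = ~crc;
	while (len--) {
		crc ^= *p++;
		for (int bit = 0; bit < 8; bit++)
			crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1));
	}
	return ~crc;
}

int main(void)
{
	const char *msg = "123456789";

	/* Standard check value for this CRC over "123456789": cbf43926. */
	printf("%08x\n", crc32_ieee(msg, strlen(msg), 0));
	return 0;
}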
00:33:05.105 LIB libspdk_vfio_user.a 00:33:05.105 CC lib/util/pipe.o 00:33:05.105 CC lib/util/strerror_tls.o 00:33:05.105 CC lib/util/string.o 00:33:05.105 CC lib/util/uuid.o 00:33:05.105 CC lib/util/fd_group.o 00:33:05.105 CC lib/util/xor.o 00:33:05.105 CC lib/util/zipf.o 00:33:05.105 LIB libspdk_util.a 00:33:05.105 LIB libspdk_trace_parser.a 00:33:05.105 CC lib/json/json_util.o 00:33:05.105 CC lib/json/json_parse.o 00:33:05.105 CC lib/json/json_write.o 00:33:05.105 CC lib/idxd/idxd.o 00:33:05.105 CC lib/idxd/idxd_user.o 00:33:05.105 CC lib/conf/conf.o 00:33:05.105 CC lib/rdma/common.o 00:33:05.105 CC lib/rdma/rdma_verbs.o 00:33:05.105 CC lib/vmd/vmd.o 00:33:05.105 CC lib/env_dpdk/env.o 00:33:05.105 CC lib/env_dpdk/memory.o 00:33:05.105 CC lib/env_dpdk/pci.o 00:33:05.105 CC lib/env_dpdk/init.o 00:33:05.105 LIB libspdk_json.a 00:33:05.105 CC lib/env_dpdk/threads.o 00:33:05.105 LIB libspdk_rdma.a 00:33:05.105 LIB libspdk_conf.a 00:33:05.105 CC lib/env_dpdk/pci_ioat.o 00:33:05.105 CC lib/env_dpdk/pci_virtio.o 00:33:05.105 CC lib/env_dpdk/pci_vmd.o 00:33:05.105 LIB libspdk_idxd.a 00:33:05.105 CC lib/vmd/led.o 00:33:05.105 CC lib/env_dpdk/pci_idxd.o 00:33:05.105 CC lib/env_dpdk/pci_event.o 00:33:05.105 CC lib/jsonrpc/jsonrpc_server.o 00:33:05.105 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:33:05.105 CC lib/env_dpdk/sigbus_handler.o 00:33:05.105 CC lib/jsonrpc/jsonrpc_client.o 00:33:05.105 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:33:05.105 LIB libspdk_vmd.a 00:33:05.105 CC lib/env_dpdk/pci_dpdk.o 00:33:05.105 CC lib/env_dpdk/pci_dpdk_2207.o 00:33:05.105 CC lib/env_dpdk/pci_dpdk_2211.o 00:33:05.105 LIB libspdk_jsonrpc.a 00:33:05.105 CC lib/rpc/rpc.o 00:33:05.105 LIB libspdk_env_dpdk.a 00:33:05.105 LIB libspdk_rpc.a 00:33:05.105 CC lib/trace/trace_rpc.o 00:33:05.105 CC lib/trace/trace_flags.o 00:33:05.105 CC lib/trace/trace.o 00:33:05.105 CC lib/notify/notify_rpc.o 00:33:05.105 CC lib/notify/notify.o 00:33:05.105 CC lib/sock/sock_rpc.o 00:33:05.105 CC lib/sock/sock.o 00:33:05.105 LIB libspdk_trace.a 00:33:05.105 LIB libspdk_notify.a 00:33:05.105 LIB libspdk_sock.a 00:33:05.105 CC lib/thread/thread.o 00:33:05.105 CC lib/thread/iobuf.o 00:33:05.105 CC lib/nvme/nvme_ctrlr_cmd.o 00:33:05.105 CC lib/nvme/nvme_ns_cmd.o 00:33:05.105 CC lib/nvme/nvme_ctrlr.o 00:33:05.105 CC lib/nvme/nvme_fabric.o 00:33:05.105 CC lib/nvme/nvme_pcie.o 00:33:05.105 CC lib/nvme/nvme_pcie_common.o 00:33:05.105 CC lib/nvme/nvme_ns.o 00:33:05.105 CC lib/nvme/nvme_qpair.o 00:33:05.105 CC lib/nvme/nvme.o 00:33:05.105 LIB libspdk_thread.a 00:33:05.105 CC lib/nvme/nvme_quirks.o 00:33:05.105 CC lib/nvme/nvme_transport.o 00:33:05.105 CC lib/accel/accel.o 00:33:05.105 CC lib/blob/blobstore.o 00:33:05.105 CC lib/init/json_config.o 00:33:05.105 CC lib/blob/request.o 00:33:05.105 CC lib/blob/zeroes.o 00:33:05.105 CC lib/blob/blob_bs_dev.o 00:33:05.105 CC lib/init/subsystem.o 00:33:05.105 CC lib/nvme/nvme_discovery.o 00:33:05.105 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:33:05.105 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:33:05.105 CC lib/virtio/virtio.o 00:33:05.105 CC lib/virtio/virtio_vhost_user.o 00:33:05.105 CC lib/init/subsystem_rpc.o 00:33:05.105 CC lib/nvme/nvme_tcp.o 00:33:05.105 CC lib/nvme/nvme_opal.o 00:33:05.105 CC lib/accel/accel_rpc.o 00:33:05.105 CC lib/virtio/virtio_vfio_user.o 00:33:05.105 CC lib/init/rpc.o 00:33:05.105 CC lib/virtio/virtio_pci.o 00:33:05.105 LIB libspdk_init.a 00:33:05.105 CC lib/accel/accel_sw.o 00:33:05.105 CC lib/nvme/nvme_io_msg.o 00:33:05.105 CC lib/nvme/nvme_poll_group.o 00:33:05.105 CC lib/nvme/nvme_zns.o 
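This stretch compiles the userspace NVMe driver (lib/nvme) alongside env_dpdk, the JSON-RPC plumbing, sock, and thread. A consumer brings the driver up with an env init followed by probe/attach enumeration; the sketch below is from memory of this SPDK generation's API and should be checked against include/spdk/nvme.h before use:

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	return true;	/* attach to every controller found */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr,
	  const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("attached to %s\n", trid->traddr);
}

int main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "probe_demo";	/* name is illustrative */
	if (spdk_env_init(&opts) < 0)
		return 1;

	/* NULL trid: enumerate the default (PCIe) transport. */
	return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) ? 1 : 0;
}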
00:33:05.105 LIB libspdk_virtio.a 00:33:05.105 CC lib/nvme/nvme_cuse.o 00:33:05.105 CC lib/nvme/nvme_vfio_user.o 00:33:05.105 CC lib/event/app.o 00:33:05.105 CC lib/event/reactor.o 00:33:05.105 LIB libspdk_accel.a 00:33:05.105 CC lib/nvme/nvme_rdma.o 00:33:05.105 CC lib/event/log_rpc.o 00:33:05.105 CC lib/event/app_rpc.o 00:33:05.105 CC lib/event/scheduler_static.o 00:33:05.105 LIB libspdk_event.a 00:33:05.105 CC lib/bdev/bdev.o 00:33:05.105 CC lib/bdev/bdev_rpc.o 00:33:05.105 CC lib/bdev/bdev_zone.o 00:33:05.105 CC lib/bdev/part.o 00:33:05.105 CC lib/bdev/scsi_nvme.o 00:33:05.105 LIB libspdk_blob.a 00:33:05.105 CC lib/lvol/lvol.o 00:33:05.105 CC lib/blobfs/blobfs.o 00:33:05.105 CC lib/blobfs/tree.o 00:33:05.105 LIB libspdk_nvme.a 00:33:05.105 LIB libspdk_blobfs.a 00:33:05.105 LIB libspdk_lvol.a 00:33:05.105 LIB libspdk_bdev.a 00:33:05.105 CC lib/scsi/dev.o 00:33:05.105 CC lib/scsi/lun.o 00:33:05.105 CC lib/scsi/port.o 00:33:05.105 CC lib/scsi/scsi.o 00:33:05.105 CC lib/scsi/scsi_pr.o 00:33:05.105 CC lib/nbd/nbd.o 00:33:05.105 CC lib/scsi/scsi_bdev.o 00:33:05.105 CC lib/scsi/scsi_rpc.o 00:33:05.105 CC lib/nvmf/ctrlr.o 00:33:05.105 CC lib/ftl/ftl_core.o 00:33:05.105 CC lib/ftl/ftl_init.o 00:33:05.105 CC lib/ftl/ftl_layout.o 00:33:05.105 CC lib/ftl/ftl_debug.o 00:33:05.105 CC lib/ftl/ftl_io.o 00:33:05.105 CC lib/ftl/ftl_sb.o 00:33:05.105 CC lib/ftl/ftl_l2p.o 00:33:05.105 CC lib/nbd/nbd_rpc.o 00:33:05.105 CC lib/scsi/task.o 00:33:05.105 CC lib/ftl/ftl_l2p_flat.o 00:33:05.105 CC lib/nvmf/ctrlr_discovery.o 00:33:05.105 CC lib/nvmf/ctrlr_bdev.o 00:33:05.105 CC lib/ftl/ftl_nv_cache.o 00:33:05.105 CC lib/ftl/ftl_band.o 00:33:05.105 CC lib/nvmf/subsystem.o 00:33:05.105 LIB libspdk_nbd.a 00:33:05.105 CC lib/nvmf/nvmf.o 00:33:05.105 CC lib/nvmf/nvmf_rpc.o 00:33:05.105 LIB libspdk_scsi.a 00:33:05.105 CC lib/nvmf/transport.o 00:33:05.105 CC lib/nvmf/tcp.o 00:33:05.105 CC lib/nvmf/rdma.o 00:33:05.105 CC lib/ftl/ftl_band_ops.o 00:33:05.105 CC lib/iscsi/conn.o 00:33:05.105 CC lib/vhost/vhost.o 00:33:05.105 CC lib/iscsi/init_grp.o 00:33:05.105 CC lib/iscsi/iscsi.o 00:33:05.105 CC lib/iscsi/md5.o 00:33:05.105 CC lib/iscsi/param.o 00:33:05.105 CC lib/ftl/ftl_writer.o 00:33:05.105 CC lib/ftl/ftl_rq.o 00:33:05.105 CC lib/ftl/ftl_reloc.o 00:33:05.105 CC lib/ftl/ftl_l2p_cache.o 00:33:05.105 CC lib/ftl/ftl_p2l.o 00:33:05.105 CC lib/vhost/vhost_rpc.o 00:33:05.105 CC lib/ftl/mngt/ftl_mngt.o 00:33:05.105 CC lib/vhost/vhost_scsi.o 00:33:05.105 CC lib/iscsi/portal_grp.o 00:33:05.105 CC lib/iscsi/tgt_node.o 00:33:05.105 CC lib/iscsi/iscsi_subsystem.o 00:33:05.105 CC lib/iscsi/iscsi_rpc.o 00:33:05.105 CC lib/iscsi/task.o 00:33:05.105 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:33:05.105 LIB libspdk_nvmf.a 00:33:05.105 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:33:05.365 CC lib/vhost/vhost_blk.o 00:33:05.365 CC lib/ftl/mngt/ftl_mngt_startup.o 00:33:05.365 CC lib/ftl/mngt/ftl_mngt_md.o 00:33:05.365 CC lib/ftl/mngt/ftl_mngt_misc.o 00:33:05.365 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:33:05.365 CC lib/vhost/rte_vhost_user.o 00:33:05.365 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:33:05.365 LIB libspdk_iscsi.a 00:33:05.365 CC lib/ftl/mngt/ftl_mngt_band.o 00:33:05.365 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:33:05.365 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:33:05.365 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:33:05.365 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:33:05.365 CC lib/ftl/utils/ftl_conf.o 00:33:05.365 CC lib/ftl/utils/ftl_md.o 00:33:05.625 CC lib/ftl/utils/ftl_mempool.o 00:33:05.625 CC lib/ftl/utils/ftl_bitmap.o 00:33:05.625 CC lib/ftl/utils/ftl_property.o 
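lib/event (app.o, reactor.o, ...) built here is the framework that gives the SPDK targets their reactor loop; bdev, scsi, nvmf, and vhost all run on top of it. A sketch of the entry-point shape, assuming the two-argument spdk_app_opts_init() of this SPDK generation (the opts struct grew a size parameter; verify against include/spdk/event.h):

#include "spdk/stdinc.h"
#include "spdk/event.h"
#include "spdk/log.h"

static void
start_fn(void *ctx)
{
	/* Runs on the main reactor once subsystems (bdev, nvmf, ...)
	 * are initialized; a real target would keep reactors running. */
	SPDK_NOTICELOG("app started\n");
	spdk_app_stop(0);
}

int main(int argc, char **argv)
{
	struct spdk_app_opts opts;
	int rc;

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "event_demo";	/* name is illustrative */

	rc = spdk_app_start(&opts, start_fn, NULL);	/* blocks until stop */
	spdk_app_fini();
	return rc;
}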
00:33:05.625 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:33:05.625 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:33:05.625 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:33:05.625 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:33:05.625 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:33:05.625 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:33:05.625 CC lib/ftl/upgrade/ftl_sb_v3.o 00:33:05.625 CC lib/ftl/upgrade/ftl_sb_v5.o 00:33:05.625 CC lib/ftl/nvc/ftl_nvc_dev.o 00:33:05.884 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:33:05.884 CC lib/ftl/base/ftl_base_dev.o 00:33:05.884 CC lib/ftl/base/ftl_base_bdev.o 00:33:05.884 LIB libspdk_ftl.a 00:33:05.884 LIB libspdk_vhost.a 00:33:06.142 CC module/env_dpdk/env_dpdk_rpc.o 00:33:06.143 CC module/accel/dsa/accel_dsa.o 00:33:06.143 CC module/accel/error/accel_error.o 00:33:06.143 CC module/scheduler/dynamic/scheduler_dynamic.o 00:33:06.143 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:33:06.143 CC module/blob/bdev/blob_bdev.o 00:33:06.143 CC module/accel/iaa/accel_iaa.o 00:33:06.143 CC module/scheduler/gscheduler/gscheduler.o 00:33:06.143 CC module/sock/posix/posix.o 00:33:06.143 CC module/accel/ioat/accel_ioat.o 00:33:06.143 LIB libspdk_env_dpdk_rpc.a 00:33:06.143 CC module/accel/ioat/accel_ioat_rpc.o 00:33:06.401 LIB libspdk_scheduler_dpdk_governor.a 00:33:06.401 CC module/accel/error/accel_error_rpc.o 00:33:06.401 LIB libspdk_scheduler_gscheduler.a 00:33:06.401 LIB libspdk_scheduler_dynamic.a 00:33:06.401 CC module/accel/iaa/accel_iaa_rpc.o 00:33:06.401 CC module/accel/dsa/accel_dsa_rpc.o 00:33:06.401 LIB libspdk_blob_bdev.a 00:33:06.401 LIB libspdk_accel_ioat.a 00:33:06.401 LIB libspdk_accel_error.a 00:33:06.401 LIB libspdk_accel_iaa.a 00:33:06.401 LIB libspdk_accel_dsa.a 00:33:06.401 CC module/blobfs/bdev/blobfs_bdev.o 00:33:06.401 CC module/bdev/error/vbdev_error.o 00:33:06.401 CC module/bdev/null/bdev_null.o 00:33:06.401 CC module/bdev/lvol/vbdev_lvol.o 00:33:06.401 CC module/bdev/gpt/gpt.o 00:33:06.401 CC module/bdev/delay/vbdev_delay.o 00:33:06.401 CC module/bdev/malloc/bdev_malloc.o 00:33:06.401 CC module/bdev/nvme/bdev_nvme.o 00:33:06.401 CC module/bdev/passthru/vbdev_passthru.o 00:33:06.660 LIB libspdk_sock_posix.a 00:33:06.660 CC module/bdev/malloc/bdev_malloc_rpc.o 00:33:06.660 CC module/bdev/gpt/vbdev_gpt.o 00:33:06.660 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:33:06.660 CC module/bdev/error/vbdev_error_rpc.o 00:33:06.660 CC module/bdev/null/bdev_null_rpc.o 00:33:06.660 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:33:06.660 CC module/bdev/nvme/bdev_nvme_rpc.o 00:33:06.660 CC module/bdev/delay/vbdev_delay_rpc.o 00:33:06.660 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:33:06.660 LIB libspdk_bdev_malloc.a 00:33:06.660 LIB libspdk_blobfs_bdev.a 00:33:06.660 LIB libspdk_bdev_error.a 00:33:06.660 LIB libspdk_bdev_null.a 00:33:06.920 LIB libspdk_bdev_gpt.a 00:33:06.920 LIB libspdk_bdev_passthru.a 00:33:06.920 LIB libspdk_bdev_delay.a 00:33:06.920 CC module/bdev/nvme/nvme_rpc.o 00:33:06.920 CC module/bdev/raid/bdev_raid.o 00:33:06.920 CC module/bdev/raid/bdev_raid_rpc.o 00:33:06.920 CC module/bdev/split/vbdev_split.o 00:33:06.920 CC module/bdev/zone_block/vbdev_zone_block.o 00:33:06.920 CC module/bdev/ftl/bdev_ftl.o 00:33:06.920 CC module/bdev/aio/bdev_aio.o 00:33:06.920 LIB libspdk_bdev_lvol.a 00:33:06.920 CC module/bdev/aio/bdev_aio_rpc.o 00:33:06.920 CC module/bdev/split/vbdev_split_rpc.o 00:33:06.920 CC module/bdev/nvme/bdev_mdns_client.o 00:33:07.180 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:33:07.180 CC module/bdev/iscsi/bdev_iscsi.o 00:33:07.180 CC 
module/bdev/ftl/bdev_ftl_rpc.o 00:33:07.180 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:33:07.180 LIB libspdk_bdev_aio.a 00:33:07.180 CC module/bdev/virtio/bdev_virtio_scsi.o 00:33:07.180 CC module/bdev/virtio/bdev_virtio_blk.o 00:33:07.180 LIB libspdk_bdev_split.a 00:33:07.180 CC module/bdev/nvme/vbdev_opal.o 00:33:07.180 CC module/bdev/nvme/vbdev_opal_rpc.o 00:33:07.180 LIB libspdk_bdev_zone_block.a 00:33:07.180 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:33:07.180 CC module/bdev/raid/bdev_raid_sb.o 00:33:07.180 LIB libspdk_bdev_ftl.a 00:33:07.180 CC module/bdev/raid/raid0.o 00:33:07.180 LIB libspdk_bdev_iscsi.a 00:33:07.180 CC module/bdev/raid/raid1.o 00:33:07.180 CC module/bdev/virtio/bdev_virtio_rpc.o 00:33:07.180 CC module/bdev/raid/concat.o 00:33:07.440 LIB libspdk_bdev_virtio.a 00:33:07.440 LIB libspdk_bdev_nvme.a 00:33:07.440 LIB libspdk_bdev_raid.a 00:33:07.699 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:33:07.699 CC module/event/subsystems/iobuf/iobuf.o 00:33:07.699 CC module/event/subsystems/scheduler/scheduler.o 00:33:07.699 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:33:07.699 CC module/event/subsystems/vmd/vmd_rpc.o 00:33:07.699 CC module/event/subsystems/vmd/vmd.o 00:33:07.699 CC module/event/subsystems/sock/sock.o 00:33:07.699 LIB libspdk_event_sock.a 00:33:07.699 LIB libspdk_event_iobuf.a 00:33:07.699 LIB libspdk_event_vmd.a 00:33:07.699 LIB libspdk_event_scheduler.a 00:33:07.699 LIB libspdk_event_vhost_blk.a 00:33:07.959 CC module/event/subsystems/accel/accel.o 00:33:07.959 LIB libspdk_event_accel.a 00:33:08.218 CC module/event/subsystems/bdev/bdev.o 00:33:08.478 LIB libspdk_event_bdev.a 00:33:08.478 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:33:08.478 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:33:08.478 CC module/event/subsystems/scsi/scsi.o 00:33:08.478 CC module/event/subsystems/nbd/nbd.o 00:33:08.737 LIB libspdk_event_scsi.a 00:33:08.737 LIB libspdk_event_nbd.a 00:33:08.737 LIB libspdk_event_nvmf.a 00:33:08.737 CC module/event/subsystems/iscsi/iscsi.o 00:33:08.737 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:33:08.997 LIB libspdk_event_vhost_scsi.a 00:33:08.997 LIB libspdk_event_iscsi.a 00:33:09.257 CXX app/trace/trace.o 00:33:09.257 TEST_HEADER include/spdk/config.h 00:33:09.257 CXX test/cpp_headers/accel.o 00:33:09.257 CC examples/accel/perf/accel_perf.o 00:33:09.257 CC test/event/event_perf/event_perf.o 00:33:09.257 CC test/bdev/bdevio/bdevio.o 00:33:09.257 CC test/accel/dif/dif.o 00:33:09.257 CC test/dma/test_dma/test_dma.o 00:33:09.257 CC test/env/mem_callbacks/mem_callbacks.o 00:33:09.257 CC test/blobfs/mkfs/mkfs.o 00:33:09.257 CC test/app/bdev_svc/bdev_svc.o 00:33:09.516 CXX test/cpp_headers/accel_module.o 00:33:09.516 LINK event_perf 00:33:09.516 LINK mkfs 00:33:09.516 LINK bdev_svc 00:33:09.516 CXX test/cpp_headers/assert.o 00:33:09.516 LINK dif 00:33:09.516 LINK spdk_trace 00:33:09.516 LINK bdevio 00:33:09.516 LINK accel_perf 00:33:09.516 LINK test_dma 00:33:09.776 CXX test/cpp_headers/barrier.o 00:33:09.776 LINK mem_callbacks 00:33:09.776 CXX test/cpp_headers/base64.o 00:33:10.035 CXX test/cpp_headers/bdev.o 00:33:10.295 CXX test/cpp_headers/bdev_module.o 00:33:10.864 CXX test/cpp_headers/bdev_zone.o 00:33:11.433 CXX test/cpp_headers/bit_array.o 00:33:11.999 CXX test/cpp_headers/bit_pool.o 00:33:12.258 CXX test/cpp_headers/blob.o 00:33:12.517 CXX test/cpp_headers/blob_bdev.o 00:33:13.454 CXX test/cpp_headers/blobfs.o 00:33:14.023 CXX test/cpp_headers/blobfs_bdev.o 00:33:14.592 CXX test/cpp_headers/conf.o 00:33:15.194 CXX 
test/cpp_headers/config.o 00:33:15.194 CXX test/cpp_headers/cpuset.o 00:33:16.132 CXX test/cpp_headers/crc16.o 00:33:16.392 CXX test/cpp_headers/crc32.o 00:33:16.961 CXX test/cpp_headers/crc64.o 00:33:17.220 CC app/trace_record/trace_record.o 00:33:17.480 CXX test/cpp_headers/dif.o 00:33:18.049 CXX test/cpp_headers/dma.o 00:33:18.049 LINK spdk_trace_record 00:33:18.618 CXX test/cpp_headers/endian.o 00:33:19.187 CXX test/cpp_headers/env.o 00:33:20.125 CXX test/cpp_headers/env_dpdk.o 00:33:21.063 CXX test/cpp_headers/event.o 00:33:22.002 CXX test/cpp_headers/fd.o 00:33:22.938 CXX test/cpp_headers/fd_group.o 00:33:23.877 CXX test/cpp_headers/file.o 00:33:24.816 CXX test/cpp_headers/ftl.o 00:33:26.194 CXX test/cpp_headers/gpt_spec.o 00:33:27.132 CC test/env/vtophys/vtophys.o 00:33:27.132 CXX test/cpp_headers/hexlify.o 00:33:27.700 LINK vtophys 00:33:28.269 CXX test/cpp_headers/histogram_data.o 00:33:29.207 CXX test/cpp_headers/idxd.o 00:33:29.774 CC test/event/reactor/reactor.o 00:33:30.032 CXX test/cpp_headers/idxd_spec.o 00:33:30.599 LINK reactor 00:33:31.168 CXX test/cpp_headers/init.o 00:33:32.542 CXX test/cpp_headers/ioat.o 00:33:32.543 CXX test/cpp_headers/ioat_spec.o 00:33:33.477 CXX test/cpp_headers/iscsi_spec.o 00:33:34.412 CXX test/cpp_headers/json.o 00:33:35.790 CXX test/cpp_headers/jsonrpc.o 00:33:36.727 CXX test/cpp_headers/likely.o 00:33:37.664 CXX test/cpp_headers/log.o 00:33:39.043 CXX test/cpp_headers/lvol.o 00:33:39.043 CC app/nvmf_tgt/nvmf_main.o 00:33:40.422 LINK nvmf_tgt 00:33:40.422 CXX test/cpp_headers/memory.o 00:33:41.414 CXX test/cpp_headers/mmio.o 00:33:42.367 CC examples/bdev/hello_world/hello_bdev.o 00:33:42.626 CXX test/cpp_headers/nbd.o 00:33:42.886 CXX test/cpp_headers/notify.o 00:33:43.454 LINK hello_bdev 00:33:43.713 CXX test/cpp_headers/nvme.o 00:33:44.651 CXX test/cpp_headers/nvme_intel.o 00:33:46.032 CXX test/cpp_headers/nvme_ocssd.o 00:33:47.410 CXX test/cpp_headers/nvme_ocssd_spec.o 00:33:47.978 CXX test/cpp_headers/nvme_spec.o 00:33:48.914 CXX test/cpp_headers/nvme_zns.o 00:33:50.293 CXX test/cpp_headers/nvmf.o 00:33:51.231 CXX test/cpp_headers/nvmf_cmd.o 00:33:52.609 CXX test/cpp_headers/nvmf_fc_spec.o 00:33:53.987 CXX test/cpp_headers/nvmf_spec.o 00:33:54.246 CXX test/cpp_headers/nvmf_transport.o 00:33:55.182 CXX test/cpp_headers/opal.o 00:33:56.120 CXX test/cpp_headers/opal_spec.o 00:33:56.378 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:33:56.947 CXX test/cpp_headers/pci_ids.o 00:33:57.206 LINK env_dpdk_post_init 00:33:57.774 CXX test/cpp_headers/pipe.o 00:33:58.713 CXX test/cpp_headers/queue.o 00:33:58.972 CXX test/cpp_headers/reduce.o 00:33:59.910 CXX test/cpp_headers/rpc.o 00:34:00.848 CXX test/cpp_headers/scheduler.o 00:34:01.786 CXX test/cpp_headers/scsi.o 00:34:03.163 CXX test/cpp_headers/scsi_spec.o 00:34:03.731 CXX test/cpp_headers/sock.o 00:34:05.110 CXX test/cpp_headers/stdinc.o 00:34:06.510 CXX test/cpp_headers/string.o 00:34:07.090 CC test/event/reactor_perf/reactor_perf.o 00:34:07.660 CXX test/cpp_headers/thread.o 00:34:08.228 LINK reactor_perf 00:34:09.166 CXX test/cpp_headers/trace.o 00:34:10.546 CXX test/cpp_headers/trace_parser.o 00:34:11.932 CXX test/cpp_headers/tree.o 00:34:11.932 CXX test/cpp_headers/ublk.o 00:34:13.308 CXX test/cpp_headers/util.o 00:34:14.686 CXX test/cpp_headers/uuid.o 00:34:15.624 CXX test/cpp_headers/version.o 00:34:15.884 CXX test/cpp_headers/vfio_user_pci.o 00:34:17.791 CXX test/cpp_headers/vfio_user_spec.o 00:34:19.168 CXX test/cpp_headers/vhost.o 00:34:20.548 CXX test/cpp_headers/vmd.o 
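The long run of CXX test/cpp_headers/*.o targets here is a header-hygiene check: each public spdk/*.h header is compiled as its own C++ translation unit, so a header missing extern "C" guards or using C++-reserved identifiers fails at this step rather than in a user's application. Each unit amounts to little more than the following (illustrative shape, not the literal SPDK test source):

/* One header under test per translation unit, compiled with CXX. */
#include "spdk/nvme.h"

int main(void)
{
	return 0;	/* passing == the header compiled standalone as C++ */
}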
00:34:22.453 CXX test/cpp_headers/xor.o 00:34:23.832 CXX test/cpp_headers/zipf.o 00:34:25.739 CC test/env/memory/memory_ut.o 00:34:31.016 LINK memory_ut 00:34:43.224 CC examples/bdev/bdevperf/bdevperf.o 00:34:46.595 LINK bdevperf 00:34:51.872 CC test/env/pci/pci_ut.o 00:34:53.252 LINK pci_ut 00:34:54.190 CC test/event/app_repeat/app_repeat.o 00:34:55.129 LINK app_repeat 00:35:10.026 CC test/event/scheduler/scheduler.o 00:35:10.026 LINK scheduler 00:35:10.026 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:35:10.026 CC app/iscsi_tgt/iscsi_tgt.o 00:35:10.594 LINK iscsi_tgt 00:35:10.594 LINK nvme_fuzz 00:35:11.163 CC examples/blob/hello_world/hello_blob.o 00:35:11.163 CC examples/ioat/perf/perf.o 00:35:12.099 LINK hello_blob 00:35:12.099 LINK ioat_perf 00:35:12.358 CC examples/ioat/verify/verify.o 00:35:12.927 LINK verify 00:35:45.010 CC examples/blob/cli/blobcli.o 00:35:45.010 LINK blobcli 00:35:49.205 CC app/spdk_tgt/spdk_tgt.o 00:35:49.778 LINK spdk_tgt 00:35:49.778 CC test/lvol/esnap/esnap.o 00:35:52.319 CC test/nvme/aer/aer.o 00:35:53.256 LINK aer 00:36:01.382 LINK esnap 00:36:06.659 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:36:08.565 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:36:09.134 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:36:11.040 LINK vhost_fuzz 00:36:11.299 LINK iscsi_fuzz 00:36:43.380 CC test/nvme/reset/reset.o 00:36:43.947 LINK reset 00:36:56.149 CC app/spdk_lspci/spdk_lspci.o 00:36:56.408 LINK spdk_lspci 00:37:00.601 CC test/nvme/sgl/sgl.o 00:37:01.978 LINK sgl 00:37:03.878 CC test/nvme/e2edp/nvme_dp.o 00:37:04.813 LINK nvme_dp 00:37:22.903 CC test/app/histogram_perf/histogram_perf.o 00:37:22.903 LINK histogram_perf 00:37:25.438 CC test/nvme/overhead/overhead.o 00:37:27.344 LINK overhead 00:37:33.911 CC test/nvme/err_injection/err_injection.o 00:37:34.170 LINK err_injection 00:37:42.314 CC test/nvme/startup/startup.o 00:37:42.314 LINK startup 00:37:42.881 CC app/spdk_nvme_perf/perf.o 00:37:46.167 LINK spdk_nvme_perf 00:37:52.731 CC test/rpc_client/rpc_client_test.o 00:37:52.731 LINK rpc_client_test 00:37:54.105 CC test/app/jsoncat/jsoncat.o 00:37:55.049 LINK jsoncat 00:37:56.425 CC test/app/stub/stub.o 00:37:57.359 LINK stub 00:37:58.294 CC test/nvme/reserve/reserve.o 00:37:59.671 LINK reserve 00:38:11.871 CC examples/nvme/hello_world/hello_world.o 00:38:12.127 LINK hello_world 00:38:20.263 CC examples/nvme/reconnect/reconnect.o 00:38:21.639 LINK reconnect 00:38:22.575 CC examples/nvme/nvme_manage/nvme_manage.o 00:38:24.478 LINK nvme_manage 00:38:27.013 CC test/nvme/simple_copy/simple_copy.o 00:38:28.388 LINK simple_copy 00:38:30.293 CC test/nvme/connect_stress/connect_stress.o 00:38:31.261 LINK connect_stress 00:38:37.819 CC app/spdk_nvme_identify/identify.o 00:38:42.053 LINK spdk_nvme_identify 00:38:52.032 CC examples/nvme/arbitration/arbitration.o 00:38:52.598 LINK arbitration 00:38:54.501 CC app/spdk_nvme_discover/discovery_aer.o 00:38:55.879 LINK spdk_nvme_discover 00:39:01.151 CC app/spdk_top/spdk_top.o 00:39:02.599 LINK spdk_top 00:39:03.975 CC test/nvme/boot_partition/boot_partition.o 00:39:04.543 CC test/nvme/compliance/nvme_compliance.o 00:39:04.802 LINK boot_partition 00:39:06.178 LINK nvme_compliance 00:39:11.448 CC test/nvme/fused_ordering/fused_ordering.o 00:39:11.705 LINK fused_ordering 00:39:14.240 CC test/nvme/doorbell_aers/doorbell_aers.o 00:39:15.176 LINK doorbell_aers 00:39:18.461 CC examples/nvme/hotplug/hotplug.o 00:39:19.395 LINK hotplug 00:39:20.771 CC examples/nvme/cmb_copy/cmb_copy.o 00:39:22.192 LINK cmb_copy 00:39:25.477 CC 
examples/nvme/abort/abort.o 00:39:26.855 LINK abort 00:39:34.978 CC test/thread/poller_perf/poller_perf.o 00:39:35.237 LINK poller_perf 00:39:41.804 CC test/thread/lock/spdk_lock.o 00:39:43.708 CC app/vhost/vhost.o 00:39:45.099 LINK vhost 00:39:45.667 LINK spdk_lock 00:39:57.877 CC app/spdk_dd/spdk_dd.o 00:39:57.877 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:39:57.877 LINK pmr_persistence 00:39:57.877 LINK spdk_dd 00:40:03.227 CC test/nvme/fdp/fdp.o 00:40:03.227 CC examples/sock/hello_world/hello_sock.o 00:40:03.227 LINK fdp 00:40:03.227 LINK hello_sock 00:40:07.419 CC app/fio/nvme/fio_plugin.o 00:40:08.357 LINK spdk_nvme 00:40:08.357 CC app/fio/bdev/fio_plugin.o 00:40:08.357 CC test/nvme/cuse/cuse.o 00:40:10.297 LINK spdk_bdev 00:40:11.230 LINK cuse 00:40:14.513 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:40:15.080 LINK histogram_ut 00:40:16.985 CC test/unit/lib/accel/accel.c/accel_ut.o 00:40:18.365 CC examples/vmd/lsvmd/lsvmd.o 00:40:19.300 LINK lsvmd 00:40:19.300 CC examples/nvmf/nvmf/nvmf.o 00:40:21.203 LINK nvmf 00:40:21.768 LINK accel_ut 00:40:39.869 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:40:39.869 CC test/unit/lib/bdev/part.c/part_ut.o 00:40:42.451 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:40:43.386 LINK scsi_nvme_ut 00:40:46.676 LINK part_ut 00:40:46.935 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:40:48.312 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:40:48.571 LINK gpt_ut 00:40:51.859 LINK vbdev_lvol_ut 00:40:52.117 LINK bdev_ut 00:40:58.749 CC examples/vmd/led/led.o 00:40:58.749 LINK led 00:41:01.279 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:41:05.464 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:41:10.733 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:41:10.733 LINK bdev_ut 00:41:10.733 LINK bdev_raid_ut 00:41:12.107 LINK bdev_raid_sb_ut 00:41:22.085 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:41:23.461 LINK concat_ut 00:41:28.757 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:41:29.695 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:41:29.695 LINK bdev_zone_ut 00:41:31.602 LINK raid1_ut 00:41:31.862 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:41:33.768 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:41:34.337 LINK vbdev_zone_block_ut 00:41:35.275 LINK blob_bdev_ut 00:41:36.263 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:41:42.958 CC examples/util/zipf/zipf.o 00:41:43.217 LINK zipf 00:41:45.751 CC test/unit/lib/blob/blob.c/blob_ut.o 00:41:46.010 CC examples/thread/thread/thread_ex.o 00:41:46.576 LINK bdev_nvme_ut 00:41:46.835 LINK thread 00:41:47.771 CC examples/idxd/perf/perf.o 00:41:48.707 LINK idxd_perf 00:41:56.907 CC examples/interrupt_tgt/interrupt_tgt.o 00:41:57.475 LINK interrupt_tgt 00:41:58.410 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:41:58.669 LINK tree_ut 00:41:58.669 LINK blob_ut 00:42:01.227 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:42:01.486 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:42:02.051 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:42:02.987 LINK blobfs_bdev_ut 00:42:03.554 LINK blobfs_async_ut 00:42:04.122 LINK blobfs_sync_ut 00:42:07.411 CC test/unit/lib/dma/dma.c/dma_ut.o 00:42:08.822 LINK dma_ut 00:42:09.806 CC test/unit/lib/event/app.c/app_ut.o 00:42:12.341 LINK app_ut 00:42:16.534 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:42:17.912 LINK ioat_ut 00:42:21.260 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:42:21.519 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:42:21.519 CC 
00:42:21.776 CC test/unit/lib/json/json_util.c/json_util_ut.o
00:42:22.034 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o
00:42:22.970 LINK jsonrpc_server_ut
00:42:22.970 LINK reactor_ut
00:42:23.538 LINK json_util_ut
00:42:24.106 LINK conn_ut
00:42:26.643 LINK json_parse_ut
00:42:33.265 CC test/unit/lib/json/json_write.c/json_write_ut.o
00:42:34.201 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o
00:42:34.201 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o
00:42:34.460 LINK json_write_ut
00:42:35.028 CC test/unit/lib/log/log.c/log_ut.o
00:42:35.286 LINK init_grp_ut
00:42:35.853 LINK log_ut
00:42:38.417 LINK iscsi_ut
00:42:38.985 CC test/unit/lib/lvol/lvol.c/lvol_ut.o
00:42:39.922 CC test/unit/lib/iscsi/param.c/param_ut.o
00:42:39.922 CC test/unit/lib/notify/notify.c/notify_ut.o
00:42:40.859 LINK notify_ut
00:42:41.119 LINK param_ut
00:42:41.378 CC test/unit/lib/nvme/nvme.c/nvme_ut.o
00:42:42.779 LINK lvol_ut
00:42:44.159 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o
00:42:44.159 LINK nvme_ut
00:42:44.418 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o
00:42:45.795 LINK portal_grp_ut
00:42:46.053 LINK tgt_node_ut
00:42:47.959 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o
00:42:48.217 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o
00:42:50.752 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o
00:42:50.752 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o
00:42:50.752 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o
00:42:51.688 LINK nvme_ctrlr_ut
00:42:51.688 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o
00:42:51.688 LINK tcp_ut
00:42:51.946 LINK ctrlr_discovery_ut
00:42:52.205 LINK ctrlr_ut
00:42:52.464 LINK ctrlr_bdev_ut
00:42:52.723 LINK subsystem_ut
00:42:57.988 CC test/unit/lib/scsi/dev.c/dev_ut.o
00:42:58.946 LINK dev_ut
00:42:59.204 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o
00:42:59.772 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o
00:43:00.337 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o
00:43:00.596 CC test/unit/lib/nvmf/transport.c/transport_ut.o
00:43:00.596 LINK nvme_ctrlr_cmd_ut
00:43:00.596 LINK nvmf_ut
00:43:01.165 CC test/unit/lib/scsi/lun.c/lun_ut.o
00:43:01.165 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o
00:43:02.102 LINK lun_ut
00:43:02.102 LINK nvme_ctrlr_ocssd_cmd_ut
00:43:02.362 LINK transport_ut
00:43:02.621 LINK rdma_ut
00:43:02.621 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o
00:43:04.000 CC test/unit/lib/sock/sock.c/sock_ut.o
00:43:04.570 LINK nvme_ns_ut
00:43:04.570 CC test/unit/lib/thread/thread.c/thread_ut.o
00:43:04.829 CC test/unit/lib/util/base64.c/base64_ut.o
00:43:05.397 LINK base64_ut
00:43:05.656 LINK sock_ut
00:43:06.228 LINK thread_ut
00:43:06.488 CC test/unit/lib/scsi/scsi.c/scsi_ut.o
00:43:06.747 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o
00:43:06.747 LINK scsi_ut
00:43:06.747 CC test/unit/lib/util/bit_array.c/bit_array_ut.o
00:43:06.747 LINK pci_event_ut
00:43:07.315 LINK bit_array_ut
00:43:07.575 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o
00:43:07.834 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o
00:43:08.094 CC test/unit/lib/util/cpuset.c/cpuset_ut.o
00:43:08.353 LINK cpuset_ut
00:43:08.353 LINK scsi_bdev_ut
00:43:08.353 CC test/unit/lib/sock/posix.c/posix_ut.o
00:43:08.643 CC test/unit/lib/init/subsystem.c/subsystem_ut.o
00:43:08.643 LINK nvme_ns_cmd_ut
00:43:08.643 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o
00:43:08.933 LINK subsystem_ut
00:43:08.933 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o
00:43:08.933 LINK posix_ut
00:43:08.933 LINK iobuf_ut
00:43:09.192 LINK scsi_pr_ut
00:43:09.451 CC test/unit/lib/util/crc16.c/crc16_ut.o
00:43:09.710 LINK crc16_ut
00:43:09.969 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o
00:43:10.906 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o
00:43:10.906 LINK crc32_ieee_ut
00:43:10.906 CC test/unit/lib/util/crc32c.c/crc32c_ut.o
00:43:10.906 LINK nvme_ns_ocssd_cmd_ut
00:43:11.165 LINK crc32c_ut
00:43:11.165 CC test/unit/lib/util/crc64.c/crc64_ut.o
00:43:11.165 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o
00:43:11.165 LINK crc64_ut
00:43:11.165 CC test/unit/lib/rpc/rpc.c/rpc_ut.o
00:43:11.424 LINK rpc_ut
00:43:11.424 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o
00:43:11.424 CC test/unit/lib/idxd/idxd.c/idxd_ut.o
00:43:11.684 CC test/unit/lib/util/dif.c/dif_ut.o
00:43:11.684 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o
00:43:11.684 CC test/unit/lib/util/iov.c/iov_ut.o
00:43:11.684 LINK idxd_user_ut
00:43:11.684 LINK iov_ut
00:43:11.942 LINK idxd_ut
00:43:12.201 LINK nvme_pcie_ut
00:43:12.201 LINK dif_ut
00:43:12.201 LINK nvme_poll_group_ut
00:43:12.461 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o
00:43:12.720 CC test/unit/lib/vhost/vhost.c/vhost_ut.o
00:43:13.289 CC test/unit/lib/rdma/common.c/common_ut.o
00:43:13.289 LINK nvme_qpair_ut
00:43:13.289 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o
00:43:13.289 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o
00:43:13.548 LINK common_ut
00:43:13.548 CC test/unit/lib/util/math.c/math_ut.o
00:43:13.808 LINK ftl_l2p_ut
00:43:13.808 LINK math_ut
00:43:13.808 LINK nvme_quirks_ut
00:43:13.808 LINK vhost_ut
00:43:14.068 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o
00:43:14.327 CC test/unit/lib/util/pipe.c/pipe_ut.o
00:43:14.327 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o
00:43:14.327 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o
00:43:14.586 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o
00:43:14.586 LINK pipe_ut
00:43:14.845 LINK nvme_transport_ut
00:43:14.845 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o
00:43:14.845 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o
00:43:15.103 LINK nvme_tcp_ut
00:43:15.103 LINK ftl_band_ut
00:43:15.103 LINK nvme_io_msg_ut
00:43:15.671 LINK nvme_fabric_ut
00:43:15.671 CC test/unit/lib/util/string.c/string_ut.o
00:43:15.671 LINK nvme_pcie_common_ut
00:43:15.930 LINK string_ut
00:43:16.497 CC test/unit/lib/util/xor.c/xor_ut.o
00:43:16.756 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o
00:43:16.756 LINK xor_ut
00:43:17.015 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o
00:43:17.274 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o
00:43:17.274 LINK nvme_opal_ut
00:43:17.533 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o
00:43:17.533 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o
00:43:17.533 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o
00:43:17.533 LINK ftl_bitmap_ut
00:43:17.791 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o
00:43:17.791 LINK ftl_mempool_ut
00:43:17.791 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o
00:43:17.791 LINK ftl_io_ut
00:43:17.791 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o
00:43:17.791 LINK nvme_cuse_ut
00:43:18.049 LINK nvme_rdma_ut
00:43:18.049 LINK ftl_mngt_ut
00:43:18.307 LINK ftl_sb_ut
00:43:18.566 LINK ftl_layout_upgrade_ut
00:43:50.648 json_parse_ut.c: In function ‘test_parse_nesting’:
00:43:50.648 json_parse_ut.c:616:1: note: variable tracking size limit exceeded with ‘-fvar-tracking-assignments’, retrying without
00:43:50.648 616 | test_parse_nesting(void)
00:43:50.648 | ^
00:43:50.648 06:36:17 -- spdk/autopackage.sh@44 -- $ make -j10 clean
00:43:50.648 make[1]: Nothing to be done for 'clean'.
00:43:51.240 06:36:21 -- spdk/autopackage.sh@46 -- $ timing_exit build_release
00:43:51.240 06:36:21 -- common/autotest_common.sh@718 -- $ xtrace_disable
00:43:51.240 06:36:21 -- common/autotest_common.sh@10 -- $ set +x
00:43:51.240 06:36:21 -- spdk/autopackage.sh@48 -- $ timing_finish
00:43:51.240 06:36:21 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:43:51.240 06:36:21 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:43:51.240 06:36:21 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
+ [[ -n 2112 ]]
+ sudo kill 2112
00:43:51.250 [Pipeline] }
00:43:51.269 [Pipeline] // timeout
00:43:51.274 [Pipeline] }
00:43:51.291 [Pipeline] // stage
00:43:51.298 [Pipeline] }
00:43:51.316 [Pipeline] // catchError
00:43:51.326 [Pipeline] stage
00:43:51.329 [Pipeline] { (Stop VM)
00:43:51.347 [Pipeline] sh
00:43:51.633 + vagrant halt
00:43:54.926 ==> default: Halting domain...
00:44:04.915 [Pipeline] sh
00:44:05.195 + vagrant destroy -f
00:44:08.481 ==> default: Removing domain...
00:44:09.060 [Pipeline] sh
00:44:09.339 + mv output /var/jenkins/workspace/ubuntu22-vg-autotest/output
00:44:09.349 [Pipeline] }
00:44:09.367 [Pipeline] // stage
00:44:09.373 [Pipeline] }
00:44:09.391 [Pipeline] // dir
00:44:09.398 [Pipeline] }
00:44:09.415 [Pipeline] // wrap
00:44:09.422 [Pipeline] }
00:44:09.440 [Pipeline] // catchError
00:44:09.452 [Pipeline] stage
00:44:09.455 [Pipeline] { (Epilogue)
00:44:09.469 [Pipeline] sh
00:44:09.751 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:44:27.915 [Pipeline] catchError
00:44:27.917 [Pipeline] {
00:44:27.933 [Pipeline] sh
00:44:28.213 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:44:28.471 Artifacts sizes are good
00:44:28.479 [Pipeline] }
00:44:28.496 [Pipeline] // catchError
00:44:28.507 [Pipeline] archiveArtifacts
00:44:28.514 Archiving artifacts
00:44:28.847 [Pipeline] cleanWs
00:44:28.858 [WS-CLEANUP] Deleting project workspace...
00:44:28.858 [WS-CLEANUP] Deferred wipeout is used...
00:44:28.864 [WS-CLEANUP] done
00:44:28.866 [Pipeline] }
00:44:28.884 [Pipeline] // stage
00:44:28.890 [Pipeline] }
00:44:28.906 [Pipeline] // node
00:44:28.911 [Pipeline] End of Pipeline
00:44:28.951 Finished: SUCCESS